The AI of Trolling

A friend linked me an interesting article today about a lawyer who used ChatGPT to supplement a submission for a case he was defending, citing cases that the AI recommended. The only problem was, the cases don't actually exist - the AI made them up. This raises lots of questions about what these tools are creating and how useful they are, but there is also another interesting side to it, as according to the article, the lawyer had questioned the legitimacy of one of the cases.

According to Schwartz, he was "unaware of the possibility that its content could be false." The lawyer even provided screenshots to the judge of his interactions with ChatGPT, asking the AI chatbot if one of the cases was real. ChatGPT responded that it was. It even confirmed that the cases could be found in "reputable legal databases." Again, none of them could be found, because the cases were all created by the chatbot.

Lols. Seems ChatGPT does have a sense of humor after all - it is a troll.

Which got me thinking.

If the AIs are scraping our content and then building output that is acceptable based on what has been supported by others, aren't they going to very quickly start mimicking the loudest voices on the internet - a fraction of the world population, but the ones who scream the most? These voices tend to be the most polarized and the ones who propel the "point system" of the attention economy, where it isn't about usefulness or accuracy, it is about beating the opposition.

We have already seen how AI usage can lead to biased decisions - like the racially biased risk-assessment algorithms used in US courts - and now that it is trawling everything, it is likely to do the same. It is also probably going to give answers that, at least on the surface, seem plausible, but won't hold up under a decent sniff test, as the lawyer above found out. This means it is going to feed us tailored information, using whatever it has available - like our own social media usage - and it is going to source its answers based on what we are likely to accept as correct.

Similar to a tailored Facebook feed, the AI chatbots and their like are going to identify who we are (it isn't hard, since most require some kind of verified signup), trawl all the information they have on us and then answer our prompts based on what we want to hear, setting up personalized information silos under the guise of robust information sources. It is like the worst of the news, made personally for us, to tell us whatever story we want to believe at the time. And, as there is currently no way to see into the sourcing, there is very little chance that the average person is going to go through the steps to verify the information - after all, they are using an AI to supplement their own content.

It is like a reverse web of trust.

A web of trust uses multiple information sources to apply a confidence level to data across a network - to estimate, for example, how likely an eyewitness account is to be true, or whether someone really has all the experience they claim on their CV.
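
As a rough illustration of the mechanism (a minimal sketch - the sources, weights and the independence assumption are all made up, not any particular system), combining reports from sources of varying reliability into a single confidence score could look something like this:

```python
# Minimal "web of trust" sketch: combine reports from sources of varying
# reliability into one confidence score for a claim.
# All reliabilities below are illustrative assumptions, and treating the
# sources as independent is a strong simplification.

def confidence(reports):
    """reports: list of (reliability, vouches) pairs; reliability in (0, 1),
    vouches is True if the source supports the claim, False if it disputes it."""
    p_true, p_false = 0.5, 0.5  # start from an uninformative 50/50 prior
    for reliability, vouches in reports:
        if vouches:
            p_true *= reliability           # a reliable source says "true"
            p_false *= (1.0 - reliability)  # ...which is unlikely if it's false
        else:
            p_true *= (1.0 - reliability)
            p_false *= reliability
    return p_true / (p_true + p_false)

# Three "witnesses": two vouch for the claim, one disputes it.
print(confidence([(0.9, True), (0.7, True), (0.6, False)]))  # ~0.93
```

The point of a scheme like this is that no single voice decides; the score degrades gracefully as unreliable or conflicting sources pile up.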

AI, however, doesn't need to show its working at this point, meaning it can spit out content and it is up to the user to decide whether it is trustworthy or not. Yet most users still assume that what they get out is correct at least at a general level, since it is meant to be built on reputable sources. This is obviously not true, since it is pushing out a lot of incorrect content as well - but who's checking?

Only people with specific expertise are likely to be able to point out a particular error within the field in which they are an expert, but give that same person content from another field, and it might pass muster. It is the same as when people read a news article about something they have intimate knowledge of and pick out the flaws easily, but then turn the page and swallow whatever is said on a complex topic they know little about, as if it were accurate. It is an error of logic.

And as humans, we make them all of the time, which is why a lot of us turn to machines - we are under the illusion that they can do a better job than us, all of the time. They can't, but they can likely do a better average job across thousands of fields than any of us can as individuals, as ChatGPT can create content on hundreds of topics in the time it takes us to get started on the first paragraph of one, before we have even begun researching.

There is no way for us to keep up with the AIs from a content-production standpoint, which means that in order to stay relevant, we have to battle on another front instead. Creativity is one of those fronts, personality another. However, it seems that for now at least, they are learning to troll better than us, because people are applying what they get out of them at a professional level, even when it is inaccurate.

Just imagine the scenario of the lawyer above asking someone on the street for some cases that support his argument, and in thirty seconds the person reels off some names. What are the chances that the lawyer is going to believe them and then cite the cases in his submission?

Zero.

But when it comes from an AI, it apparently flies under the radar of good sense and gets preapproval status. And remember, I will assume that this lawyer is at least smart enough to have passed the bar at some point and has faked it well enough to have been practicing law for three decades in some capacity. Shouldn't he be exactly the kind of person who is skeptical by nature?

What hope do the rest of us Average Joes have?

It is like how everyone believes that they aren't influenced by advertising on the internet, even though the entire advertising revenue model undermines that position. We are all biased, and therefore none of us are objective. Because of this, we are easily manipulated and nudged into acting in ways we might not have acted otherwise - like a hypnotist's trick making a person cluck like a chicken in front of an audience.

It is not about intelligence, it is about human nature.

Taraz
[ Gen1: Hive ]

Posted Using LeoFinance Alpha



25 comments

I totally agree with you.
I was just as naive until I read this.
So AIs can just cook up responses too? Just wow.
The case of the lawyer is laughable though (he should have verified the claims and cases before presenting them in court).


Hello, my friend. To support a case in law, so-called jurisprudence is used - cases that have already been decided, whose only purpose is either to mitigate the sentence or to increase it, depending on which party is citing the jurisprudence.

I was reading a related article, perhaps the same one you have read - this one was published in Spain. To me it seems more like a yellow-journalism set-up by the editorial team to create a scandal. I looked in other print media and found nothing about the case mentioned.

I know you don't like AI, but it existed before you and I were born; of course, it is only recently that it has started being talked about and given that acronym.

AI is present in every instrument and tool used in a health centre, in the assembly of cars, televisions and so on. Maybe you have a Smart TV and enjoy watching a movie on it - AI is present there. When you see a word shaded on your mobile or PC screen suggesting a spelling or syntax error, that's AI at work. Your mobile phone is wrapped in AI.

Wherever you turn your eyes, you will encounter the artificial phenomenon; it is and will be inevitable. For my part, I enjoy what I can do with technological development - science and computer science have taken giant steps in favour of mankind. Despite those who say things are going in the wrong direction, I am sure the future will be better and full of novelties...


"I know you don't like AI"

I have never said this, btw. AI is hugely beneficial - I just don't like the way it is being used. I will clear that up, because I don't really like people putting words in my mouth; I have enough of my own in there already.


I just listened to an interview with Stephen Wolfram on AI referring to a recent article he had written. People forget that GIGO (garbage in, garbage out) still applies to AI, and the algorithms used to generate text are fundamentally reliant on probabilities based on prior data, not real intelligent creativity in action. Wolfram has some suggestions for how such AI systems could be used productively, and expanded upon to actually cite real data or perform mathematical calculations, but that capability is still in the future.
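
To make that point concrete with a toy example (a made-up ten-word corpus, nothing from Wolfram's actual article): a bare-bones bigram generator just re-samples whatever followed each word in its input, which is exactly why fluent-looking output says nothing about truth.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the prior
# data, then generate by sampling those counts. The output can look fluent
# while having no notion of truth - garbage in, garbage out.
corpus = "the court cited the case and the case was decided".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word, output = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:
        break  # dead end: this word was never followed by anything
    word = random.choice(options)  # probability = relative frequency in data
    output.append(word)

print(" ".join(output))  # e.g. "the case and the court cited the case was"
```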


ChatGPT recently rolled out plugins, and Wolfram Alpha and others are available, so the ability to output stuff based on real data exists.


I knew it was in the works, but is it at the consumer level yet? Trying to keep up with this area of tech is exhausting when advancements are so rapid.


Yes, at the consumer level. Only for paid subscribers though.


There is a lot of garbage coming out, which also raises the question as to who is using it as a way to improve their content.


In a way, I see parallels to altcoins and NFTs. There are good ideas out there drowned by an ocean of over-hyped noise from people who are all about style over substance.


Yes - exactly. Style and wealth, over substance. "Rich at any cost."



I think it would take a lot of time - years - to come up with counterarguments to AI in legal cases. It would take a lot of cunning lawyers, and people who can spin words for it, for an AI to become a strong lawyer. But that, and a doctor AI, would be brutal for humanity. You never know what choices they might make and what we would have to pay in the process.


AI gets used in medicine a lot - for example, it is already better at analysing MRI scans than humans. But when it comes to making social decisions, the source is going to be very important.


ChatGPT is moderately okay for general use and helpful for gathering information in a short time. But I don't prefer ChatGPT for acquiring data for research purposes, because I have seen it provide wrong information.

I only prefer to use ChatGPT for casual use.


Moderately useful, but definitely not something that should be relied on professionally, especially for specialized fields and information.


It is human nature to avoid effort whenever possible. But we are ready to make an effort to prove that it is, indeed, possible to not make an effort. Focusing on spoiling ourselves much?

AI impostors should learn that they, too, are what they eat, and most information on the internet is trash, so they shouldn't expect to be anything else. How do you learn to filter trash when your feedback is also trash?


"Focusing on spoiling ourselves much?"

I think so. Spoiling ourselves until we have nothing to do, because we have nothing to add.

"How do you learn to filter trash when your feedback is also trash?"

I think in general, we are getting dumber, more gullible, less discerning - yet feeling like we are more expert.


AI is just not accurate enough but I am surprised to see it make up a bunch of cases. It's funny to see that they can troll. I am guessing that lawyer learned a harsh lesson but I am not surprised that they are trying to cut corners.


That lawyer will only get dumber. I believe that such a person should rack his brains to work out how to deal with the case instead of using AI.
It is not nice.


Meanwhile, there was me watching a news segment on something I knew a lot about and boggling at how completely and utterly wrong they got it (they would literally have been slightly more correct if they'd done a very basic search before piling on that load of trash), and then never trusting any of them ever again - because if they couldn't do even the most basic of basic research on that, what else were they monumentally screwing up, and why were people stupid enough to just blindly believe everything they said like it was hard fact?

Probably because at one point they were supposed to fact-check it, but that fell by the wayside in the desperate rush to break it first at any and all cost.

I don't know why it didn't occur to me that people did as you described but that explains a lot.


They are screwing it all up - possibly on purpose to fit their agenda. Most people aren't experts and most will not see the thought error they are making.


I think most try to see the best in everything and desperately want to believe that they wouldn't be fooled, because they just wouldn't.


Deep fakes and purposely injected false information are the biggest red flags for AI currently, and I'm not really sure what the teams importing the data can do about it.


I think it comes down to "web of trust" mechanisms that will provide a confidence score to information in a decentralized manner.


Confidence score seems like a viable narrative.
