The AI Flood


Software such as ChatGPT is a pumped-up auto-complete. It works off predictive modelling (trained on its dataset) to choose weighted responses for what the next most likely words will be. However, it cannot distinguish fact and truth from what is not. Herein lies one of the great dangers: when people start relying upon such AIs to make decisions.
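To make the auto-complete point concrete, here is a toy sketch in Python. It is not how ChatGPT works internally (that involves transformer networks over sub-word tokens), just the same principle reduced to counting which word tends to follow which:

```python
import random
from collections import defaultdict

# Toy auto-complete: record which word follows which in a tiny corpus,
# then sample the next word in proportion to how often it was seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def complete(word, length=5):
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # frequency-weighted pick
    return " ".join(out)

print(complete("the"))  # e.g. "the cat ate the fish"
```

Note what is missing: nothing in this code knows whether "the cat ate the fish" is true. It only knows the statistics of word order, and that is exactly the problem.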

Debunking the great AI lie | Noam Chomsky, Gary Marcus, Jeremy Kahn

The father of modern linguistics, Noam Chomsky, joins scientist, author and entrepreneur Gary Marcus for a wide-ranging discussion that touches on why the myths surrounding AI are so dangerous, the inadvisability of relying on artificial intelligence tech as a world-saver, and where it all went wrong.

Ah, but you might argue that the human operator can double-check the AI's responses. But what happens when automated processes rely upon AI?

Consider the consequences when AI becomes "cheap", in the same sense that you can hire Amazon hosting services or Google cloud services today. Imagine the mass of instances that could be spun up and then automated, used for anything... especially nefarious activities, or those well intended but ultimately negative in outcome.
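A minimal sketch of what that automation looks like, assuming a hypothetical pay-per-call text endpoint; `generate_text` here is an invented stand-in, not any real provider's API:

```python
import concurrent.futures

# Invented stand-in for a rented, pay-per-call text-generation endpoint.
# Which real API sits behind it does not matter; any can be driven like this.
def generate_text(prompt: str) -> str:
    return f"[machine-written reply to: {prompt}]"

prompts = [f"Write a persuasive comment on topic #{i}" for i in range(1000)]

# A thousand "authors" for pennies: fan the prompts out across threads.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    posts = list(pool.map(generate_text, prompts))

print(len(posts), "pieces of content, no human in the loop")
```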

Take Hive, for example. What is there to stop someone using AI to run multiple accounts to harvest rewards, power up, delegate, and ultimately tip the balance of power on the platform? Such automated processes could outcompete the humans on the platform.
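To show how little logic such a swarm would actually need, here is a sketch. `HiveClient` and its methods are invented placeholders, not the real Hive API or any existing library:

```python
# HiveClient is an invented placeholder, not the real Hive API or any
# existing library; it only shows the shape of a reward-farming swarm.
class HiveClient:
    def __init__(self, account: str, key: str) -> None:
        self.account = account

    def post(self, body: str) -> None: ...   # publish machine-written content
    def vote(self, author: str) -> None: ... # upvote another account's posts
    def power_up(self, amount: float) -> None: ...  # compound stake over time

bots = [HiveClient(f"bot-{i:03d}", key="...") for i in range(100)]

for bot in bots:
    bot.post(body="[cheap generated article]")
    bot.power_up(amount=1.0)
    for other in bots:            # every bot upvotes every other bot
        if other is not bot:
            bot.vote(other.account)
```

A hundred accounts posting, voting on each other and compounding stake around the clock is not sophisticated; it is a for-loop.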

As new AIs come online, they too will scrape (train on) the available internet content. What happens when the AIs start consuming other AI content, produced by systems that, as indicated before, cannot differentiate between fact and truth?

There would be a runaway feedback loop of hellish proportions. How will we, or the AIs, differentiate real content from AI-generated noise? Will we drown in a sea of information that lacks meaning or relevance?
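As a toy illustration of that loop, assume each generation of "AI" is nothing more than a Gaussian distribution fitted to the previous generation's output. Even this trivial case drifts away from reality once no new real data enters the loop:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: the "real" data.
data = rng.normal(loc=0.0, scale=1.0, size=100)

# Each new "model" trains only on the previous model's output:
# fit a Gaussian to the samples, then sample from that fit.
for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=100)
    print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")

# With no real data to pull it back, estimation error compounds:
# the fitted parameters wander ever further from the original
# 0 and 1 with each generation trained on the one before it.
```

Scale that from two parameters to billions, and from Gaussians to language models, and the sea of meaningless information writes itself.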



3 comments

I can see the horror story but I also can't stop laughing at the thought of AI training off other AI, and if they don't somehow manage to develop actual intelligence somewhere along the way, just what a freaking hilarious (and probably catastrophic) train wreck that would be.

But AI not being able to tell fact and truth (which is what I think you meant to say towards the closing, rather than it being unable to distinguish between fact or truth? Or is there some deeper meaning that I missed?) is pretty much on par with humans in a lot of cases.


It only takes a little bit of logical thinking to see where this might end up.

OK, so if humans in some cases cannot distinguish fact or truth, how will it ever be possible for AI to do so? Think about it. The AI will only ever be able to do that if the humans who created it program it to do so. Thus if those humans suffer from biases, subscribe to fallacies, or fail in their logical thinking, then our own shortcomings will be encoded into it, if not exaggerated, because there will be the assumption that the AI will "know" better and be above all that.

If you pay careful attention to the content on the topic online, you will already see this tendency developing.


"but I also can't stop laughing at the thought of AI training off other AI"

Yes, it would be quite Monty Python, wouldn't it? 😃
