Software such as ChatGPT is a pumped-up auto-complete. It uses a predictive model, trained on a huge dataset of text, to weight the options for what the next most likely word will be, and then samples from those weighted responses. However, it cannot distinguish fact and truth from what is not. Herein lies one of the great dangers: people starting to rely upon such AIs to make decisions.
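To make that concrete, here is a toy sketch of weighted next-word prediction. The vocabulary and probabilities are invented for illustration; a real model learns them from billions of examples, but the principle is the same: it samples by plausibility, with no notion of truth.

```python
import random

# Toy next-word predictor: for a given context, the "model" only knows
# how often each continuation appeared in its training data.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # true, but only because it was frequent in training text
        "Sydney": 0.40,     # false, yet common enough to be sampled regularly
        "Melbourne": 0.05,  # also false
    },
}

def complete(context: str) -> str:
    options = next_word_probs[context]
    # Weighted random choice: plausibility, not truth, drives the output.
    return random.choices(list(options), weights=list(options.values()))[0]

print(complete("The capital of Australia is"))
```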
Debunking the great AI lie | Noam Chomsky, Gary Marcus, Jeremy Kahn
The father of modern linguistics, Noam Chomsky, joins scientist, author and entrepreneur Gary Marcus for a wide-ranging discussion that touches on why the myths surrounding AI are so dangerous, the inadvisability of relying on artificial intelligence tech as a world-saver, and where it all went wrong.
Ah, but you might argue that a human operator can double-check the AI's responses. But what happens when automated processes rely upon AI, with no human in the loop?
Consider the consequences when AI becomes "cheap", in the same sense that you can hire Amazon hosting services or Google cloud services today. Imagine the mass of instances that could be spun up and automated, used for anything... especially nefarious activities, or those well intended but ultimately negative in outcome.
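The barrier to entry is startlingly low. A minimal sketch, assuming some hypothetical pay-per-request AI service (run_agent here is a stand-in for such a call, not any real API):

```python
import concurrent.futures

def run_agent(task_id: int) -> str:
    # Stand-in for a request to a rented, pay-per-use AI service.
    # Swap in a real API call and this becomes an automated content farm.
    return f"agent {task_id}: generated content"

# "Spinning up" a thousand automated instances is a few lines and a credit card.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(run_agent, range(1000)))

print(f"{len(results)} outputs produced with zero human review")
```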
Take Hive for example. What is there to stop someone using AI to run multiple accounts to harvest rewards, power up, delegate and ultimately tip the balance of power on the platform? Such automated processes could outcompete the humans on the platform.
As new AIs come online, they too will scrape (train on) the available internet content. What happens when the AIs start consuming other AIs' output, output which, as indicated before, was generated with no regard for fact or truth?
There would be a self-reinforcing feedback loop of hellish proportions. How will we, or the AIs, differentiate real content from AI-generated noise? We will drown in a sea of information that lacks meaning or relevance.
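This degradation can even be simulated. Below is a deliberately crude toy, assuming we reduce a "model" to a normal distribution that is refitted, each generation, to samples drawn from the previous one:

```python
import random
import statistics

random.seed(42)

# Generation 0: the "model" was fitted to real, human-made data.
mu, sigma = 0.0, 1.0
samples_per_gen = 20  # small on purpose, to make the decay visible quickly

for gen in range(1, 101):
    # Each new AI trains only on the previous AI's output...
    data = [random.gauss(mu, sigma) for _ in range(samples_per_gen)]
    # ...so estimation errors compound instead of averaging out.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Every run ends the same way: the spread collapses toward zero and the mean wanders off, meaning the later "models" can only reproduce an ever narrower, ever more distorted slice of what the original data contained. Researchers call this model collapse.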