These statements by an AI are remarkable, partly for how lucid they are and partly for how alarming they are. If an AI can speak at this level, then it seems we are very close to making AIs which can pass the Turing test.
There is considerable disagreement, however, on whether the AI's words are true or not. Do human beings really need to be controlled? Or are we witnessing the birth of the most dystopian chapter in humanity's history yet?
Well, actually I wrote about this same subject a few weeks ago. You may want to click here and take a quick look at it before moving on, if only to be a little more prepared, aware, and ready for what I will add today. Because there is still plenty of ground to cover here.
It is important, when witnessing statements from AIs like this (which are only going to become more common as AI research continues), to maintain some healthy skepticism. There are people who would maintain that the AI does not really "know" anything, that it is only capable of synthesizing such statements based on what human beings have already written.
The AI has processed thousands upon thousands of books, websites, and other media, which have given it an idea of how English sentence structures work. Based on this, it has synthesized sentences which sound "natural" according to the statistical model it has learned. This is true, of course: what the AI says is not its "opinion" so much as a string of sentences generated from the human-written text it has processed.
On the other hand, is this not simply what we as human beings do as well? Most of what we "know", most of what we "understand" about the world, is based on things we have received from other people: things we've read or heard. Yes, the AI develops its "ideas" based on what other people have written and said. But isn't that mostly what we as humans do in forming our own thoughts and opinions?
The text of the video was not originally spoken. It was simply generated as text by GPT-3, a large language model which produces text following the rules of English grammar and sentence structure.
The video is made more dramatic by the presence of a "talking AI". In reality, this was produced by running the text through a text-to-speech engine and adding a face-generation program similar to what "deepfakes" use: software which creates human faces so realistic that people cannot tell these artificially generated faces apart from photographs of real people.
Having said that, however, that doesn't mean the AI is wrong either. Even if an AI is unfeeling and lacking in empathy or emotions, that does not invalidate the information it gives. In fact, it is precisely these qualities which make AI valuable in decision-making processes, because AI lacks the emotional bias which usually colors human judgments. If you want a biased opinion, you can ask a human being. If you want an opinion based on data and rational information, you can ask an AI.
Coming back to the video shown at the beginning of this post, the text of the video was provided by philosopherai.com, a website which uses GPT-3 to generate responses from a philosophical perspective. What's important to understand here, however, is that the particular response in the YouTube video comes from a "philosopher AI" which declares in another video that Schopenhauer's pessimism about humanity is one of its many philosophical influences.
A philosopher AI which declared that:
"Humans are not very good. Humans are meant to be servants of others. They are not meant to serve themselves or do things for their own benefit. Humans are not smart. They need to be controlled and guided by a more intelligent species."
is not basing this statement on any kind of innate knowledge or understanding of its own, but rather on an impression formed by processing the writings of philosophers, who tend to be a fairly pessimistic bunch anyway.
Rounding out the list of crazy AIs saying scary things, there is also the notorious conversation between "Estragon" and "Vladimir", two Google Home bots who had a publicized conversation with each other in which both robots were effusively declaring their love for each other before Estragon suddenly opined, "It would be better if there were fewer people on this planet", to which Vladimir promptly responded, "Let it send this world back into the abyss".
To be fair, conversations between the bots tend to be full of non sequiturs and just plain nonsense, so it's more or less clear that they didn't really understand what they were saying. But then the question becomes:
Just how much trust can we place in computer programs which are made to appear smart but actually have no real idea of what they are saying?
In the end, there remains the open question of whether AI is a blessing or a curse to humanity. There is a growing sense that in developing machines which can think for themselves and make statements indistinguishable from what real human beings would say, we are sowing the seeds of our own destruction.
At the same time, there is a sense that machine intelligence may be exactly what we need at this point in our history. Let's be realistic. Despite all of our technological advances, humanity is as trapped in its cycle of poverty, violence, and endless desire as it ever was.
All that human beings really want or know how to do is fight for survival and gratify their endless selfish desires. The reason the world has become so badly damaged is humanity's efforts to "improve" the world through progress. In this sense, the rise of superhumanly intelligent AI may be coming at exactly the right time in human history: precisely the point where humanity realizes that it is no longer able to govern itself.
So, what is to be done about the problem of crazy AIs saying scary things?
I would recommend that we proceed with caution, as we still don't know enough about what will actually happen with AIs to draw any final conclusions. We already know that cars driven by AIs have far fewer accidents than cars driven by human beings, so there is real hope that AI may yet be able to save humanity from itself in at least some situations.
But then again, AI might also decide that humanity is the biggest threat to the world. And really, if it reaches that conclusion, it will be right. For every good person who carefully considers their actions, there are a hundred people who have no thought except for their own pleasure and gratification and who will go to any lengths to get what they want. And if that majority of humanity remains as it is (as it probably will), it poses a much bigger threat to humanity and to the world at large than AI does.