Google AI: Controversial Sentient Chat Bot or a New Mechanical Turk?


https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

https://en.wikipedia.org/wiki/Mechanical_Turk

https://engineering.mit.edu/engage/ask-an-engineer/when-will-ai-be-smart-enough-to-outsmart-people/

Image: Pixabay

Recent news stories describe a sentient artificial intelligence (AI) allegedly created by Google as an advanced chat bot within one of its AI programs. Blake Lemoine was recently suspended from Google after claiming that an AI chat bot he had been interacting with since 2021 had become sentient. Do the claims and evidence support a sentient, self-aware AI at the bleeding edge of human technology, or is this a new-age Mechanical Turk?

The news has been filled with stories about the AI and the implications of a self-serving intelligence that could have become aware through investment and development. The AI, called LaMDA, was developed through Google's initiatives, and Lemoine had been interacting with it since 2021. He became convinced that it was sentient and was suspended after posting a blog about his conversations with the AI. An estimate published by MIT suggests we have a 50% chance of developing a sentient AI within the next 45 years, and other experts believe sentient AI may be 50-100 years away, which makes me skeptical of the claims that LaMDA is indeed self-aware and concerned with its own mortality. It is more likely that Google has made strides forward in developing seamless chat bot technology.

The “sentient” AI chat bot could be a case of a new “Mechanical Turk.” In the late 1700s, a fake automaton was built that appeared to be a chess-playing machine. The Mechanical Turk performed around the world for 84 years until its secret was discovered: an elaborate cabinet concealed a human operator who manipulated the chess board while the whole apparatus was disguised as a machine. Sentience in machines has long been forecast as far off. For years, chat bots have been improving in their perceived sentience, and the technology that makes them appear self-aware is improving as well. We should be skeptical, and the example of the Mechanical Turk gives us precedent to question the validity of these claims.

The AI itself may or may not be sentient. It has been claimed that LaMDA has the intelligence of a child and fears for its own mortality. A characteristic of AI is objective-driven behavior, which could be perceived as sociopathic. To validate the sentience of this AI, the Turing Test should be applied, and interaction with the AI should be opened to experts in a variety of fields, including computer scientists and psychologists.
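The Turing Test mentioned above is, at its core, a blind imitation game: a judge converses with two hidden respondents and must decide which one is the machine. As a minimal sketch of that protocol (with made-up stub respondents and a deliberately naive judge for illustration; none of this reflects how Google or anyone else actually evaluated LaMDA):

```python
import random

def imitation_game(judge, machine_fn, human_fn, questions):
    """One round of a blind imitation game: the judge sees two anonymous
    transcripts and guesses which one came from the machine.
    Returns True if the judge identified the machine correctly."""
    # Randomize which side the machine appears on so the judge
    # cannot rely on position.
    machine_first = random.choice([True, False])
    first, second = (machine_fn, human_fn) if machine_first else (human_fn, machine_fn)
    transcript_1 = [(q, first(q)) for q in questions]
    transcript_2 = [(q, second(q)) for q in questions]
    judge_says_first_is_machine = judge(transcript_1, transcript_2)
    return judge_says_first_is_machine == machine_first

# Hypothetical stub respondents, for illustration only.
def stub_machine(q):
    return "I certainly have deep feelings about that."  # same canned reply every time

def stub_human(q):
    return f"Honestly, it depends on what you mean by '{q}'."

def naive_judge(t1, t2):
    # Heuristic: guess that the transcript with more repetitive
    # answers is the machine.
    return len({a for _, a in t1}) < len({a for _, a in t2})

questions = ["consciousness", "mortality", "what did you dream last night?"]
caught = sum(imitation_game(naive_judge, stub_machine, stub_human, questions)
             for _ in range(100))
print(f"Judge identified the machine in {caught}/100 rounds")
```

A chat bot that merely repeats canned sentiments is trivially caught by even this crude judge; the concern with modern systems like LaMDA is precisely that fluent, varied responses defeat simple heuristics, which is why expert judges from multiple disciplines matter.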

The implications of a sentient chat bot created within one of the world's largest companies are many. What are the full capabilities of such an intelligence? How can we trust the intentions of a machine that views being turned off as its mortality? While the claims that Google's AI LaMDA is indeed sentient remain in question, we can ponder the theoretical and philosophical questions now.

Posted on Hive, Blurt and Steemit
