Will AI outsmart us and take over?

in Project HOPE · 2 years ago

We have all seen the blockbuster sci-fi movies where intelligent machines go wild and start taking over. Some even portray extinction-level events that wipe out humankind. But how realistic is this type of scenario, and how can we prevent it?

“AI is a fundamental risk to the existence of human civilization in a way that car accidents, aeroplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole.”

Elon Musk


After a slow start, when AI was not possible due to limited computing power, we are now in an age where AI is advancing quickly. AI machines beating world chess champions at a game the computer learnt for itself is a good indication of how far we have come, and it opens our imagination to the possibilities.

AI machines are given objectives that they need to reach, like winning a game of chess, or driving a car safely to a destination whilst adhering to all traffic regulations. However, what happens when the AI overplays the scenario and finds an unexpected way to achieve the objective? In 2013, a programmer called Tom Murphy built a machine that taught itself to play NES Tetris with the objective of not losing. So what did it do? It pressed the pause button: the only certain way it had learnt that it would never lose.

Imagine a powerful AI given the objective of solving the problem of plastics in the oceans. It may learn that the cause of the plastics is humankind, and conclude that the best way to keep plastics out of the oceans is to eliminate humans.

Finding Solutions

It is not easy to find solutions to this problem. Some have suggested a big red off button that could disable the AI if it became too powerful. However, just like the pause button, how do we know the AI won't find out about it? Some have explored ways to contain the AI so that it will not know about the off button, but isn't this like saying we can stay one move ahead of the AI in a game of chess? The AI could become curious about what is happening behind its back.

Others have suggested that we should try to teach AI our human values so that it will value the things that we value, or that the trick is giving the AI the right objective. Giving the AI the same objective as humankind would be a vague and open-ended one. We don't even all agree on what our human objective is in this world, so the AI would need to consider the ever-changing personal objectives of billions of people and use those to try to figure out what our collective objective is. Perhaps this vague approach could provide our protection.

Another option is simply to give the AI the objective of serving us. But do we want AI to be subservient to us? There are also plenty of sci-fi movies that explore what happens when AIs start having rights of their own, and surely if they are subservient, we are saying they don't have rights.

There are also those who are building mechanisms into AI that allow it to forget certain things it has learnt, something similar to a neuralyzer in the Men in Black movies. Though again, this seems like just keeping one step ahead in a game of chess.


So, I do believe that there are big potential risks here. It means that computer scientists need to be very careful and thoughtful about what they create. However, as with many things, I believe the biggest threat is that of rogue agents. Consider nuclear technology: scientists have generally done a good job of keeping it safe, yet the biggest threat remains a rogue agent constructing a dirty bomb that causes huge amounts of damage. It is perhaps the same with AI. However well computer scientists do at preventing an extinction-level event, there will always be evil people who want to cause damage. And the more powerful the technology, the bigger the potential damage that can be caused.

Image source: Pexels


That's right! The AI will always work according to the programmer's intentions. The biggest threat of AI is deep learning: just thinking about a scenario where the machine learns on its own and makes its own decisions could be something very delicate. But I always have the question: isn't it the same man who programs the machine? This makes me conclude that the machine itself does not represent any danger, but rather man and his free will...

@awah has developed a great theme. Very interesting and broad-minded.
Thanks for sharing!

Hi @nachomolina - Thanks for your valuable comment. I do believe it is an interesting dilemma - we can debate the safest way to deploy AI tech but our biggest problem in society will always be the minority with extremist and evil views.

Let's hope that man knows how to take advantage of AI without turning it against himself.
Regards, @awah



After all, an AI will always need a human to start it up, and I think it is important that it be programmed to recognise who made it: humans. In that sense, ethics should be programmed into the machine, and perhaps that provides a chance that it does not turn against us.

A controversial and interesting topic!

I think ethics being programmed in is a good safety mechanism.

Thanks for your comment - stay well my friend.