Would we be able to control super-smart machines?

Some experts believe not.

Some researchers warn that superintelligence would be difficult to control. Image edited by the author, original from Wikimedia Commons.

We have long been fascinated by the power of machines. Today it is unthinkable to communicate without one, or to carry out everyday procedures such as paying for services without their help; machines now drive cars, compose melodies, and even control our weapons. When we notice how many tasks already depend on machines, a question arises: will machines someday control us completely? Many scientists are already asking exactly that.

As progress in Artificial Intelligence (AI) accelerates, many scientists and philosophers are warning about the implications of a superintelligent AI. Using theoretical models, a group of scientists from the Center for Humans and Machines at the Max Planck Institute showed that it would not be possible to control a superintelligent AI. The work was recently published in the Journal of Artificial Intelligence Research.

Suppose someone managed to program an AI with intelligence far beyond the human level, connected it to the internet, and allowed it to learn autonomously. Such an AI would have access to all of humanity's data, and could modify existing systems and take control of machines online. Would it then create a utopia, or a dystopia?

Faced with this hypothetical agent, more intelligent than even the most brilliant and gifted human minds, scientists, philosophers, and technologists are rekindling the debate about the risks such an entity would pose. In the article mentioned above, an international team of computer scientists used theoretical models to show that it would be fundamentally impossible to control a superintelligent AI. The authors outline some proposals for containing such an entity, but the alarming conclusion is that total containment is, in principle, impossible, owing to limits inherent to computation itself: we would have to assume that the superintelligence can contain any program executable by a universal machine.

This superintelligence would contain all the programs executable by a machine. Image credit: Pixabay.com.

The article considers two ideas for keeping a superintelligence under control. On the one hand, its capabilities could be limited by isolating it from the internet so that it has no contact with the outside world; but this would limit its usefulness and leave it unable to address all of humanity's needs. Failing that option, the AI could instead be programmed with the principle of never harming humanity.

But this idea has its limits. The researchers conceived a theoretical containment algorithm meant to ensure the AI could not harm humans under any circumstances: it would first simulate the AI's behavior and halt it if that behavior were judged harmful. The problem is that, under the current paradigm of computation, such an algorithm cannot be built.
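The obstacle is a relative of the classic halting problem: no procedure can decide, for every possible program, what that program will eventually do. A toy sketch of the practical side of this, entirely my own illustration and not the paper's formalism, is a supervisor that can only simulate a program for a finite number of steps. It can confirm that a well-behaved program finishes, but for a program that runs forever it can never return a verdict, no matter how large its budget:

```python
# Illustration (not the paper's construction): a containment routine that
# watches a program by simulating it step by step. Toy "programs" are
# Python generators; each yield is one step of execution.

def simulate(program, budget):
    """Step a toy program for at most `budget` steps.

    Returns ("halts", result) if the program finishes within the budget,
    or ("unknown", None) if the budget runs out first. "Unknown" is the
    best any finite simulation can say about a non-terminating program.
    """
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration as done:
            return ("halts", done.value)  # generator's return value
    return ("unknown", None)

def benign():
    """A program that does one step of work and finishes."""
    yield
    return "ok"

def looping():
    """A program that never finishes."""
    while True:
        yield
```

Here `simulate(benign, 10)` returns `("halts", "ok")`, while `simulate(looping, n)` returns `("unknown", None)` for every budget `n`: raising the budget never turns "unknown" into a verdict. The paper's stronger claim is that this is not a limitation of the simulation budget but of computation itself.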

Computer scientists, then, face serious obstacles in finding a containment algorithm, and this is something that must be solved: if we are going to build something very intelligent, we must accept that it will also be very powerful, and without a way to control it we could end up with a system that prioritizes its own objectives at the expense of ours. Although this may sound like science fiction, remember that we humans have extensive experience in using technology maliciously; we use it not only for productive ends but also for destructive ones, and that is the other face of any sufficiently advanced technology.


Thanks for stopping by to read, friends. I hope you found the information interesting. See you next time.



