Artificial intelligence and the question of ethics

The time has come: artificial intelligence is here. From now on, it will spread inexorably and make life simple and beautiful ... This, or something like it, is the tenor of reports that appear regularly. So far, however, such announcements seem to owe more to the longings of engineers than to reality. And it is not only technical questions that remain open; ethical issues also still need to be clarified.

A lot has happened in the field of artificial intelligence in recent decades. As early as 1950, the mathematician Alan Turing laid the theoretical foundations of artificial intelligence. Among other things, he developed a chess-playing program, although he had to carry out its complicated calculations by hand, because no suitable hardware existed yet.

Chess and more is possible

Artificial intelligence has improved significantly. Just now, a computer built on Google DeepMind technology won the first two of its "Go" games against the South Korean professional Lee Sedol. "Go" is considered the most complex strategy game there is, and Sedol is one of the best players in the world. But the artificial intelligence systems that exist so far are mostly highly specialized. A machine that plays "Go" cannot tie its own shoelaces, and a machine that can look at almost any photo and immediately recognize a cat in it (a highly complex task, since the cat may be photographed from behind or have its eyes closed) will not understand the simplest joke.

However, none of this should obscure the fact that research into artificial intelligence is indeed making great progress. The search engine Google, for example, has long since stopped relying on stored search terms alone; through "Google Brain" it also uses algorithms that build networks of associations and "intelligently" work out answers of their own. The same applies to Siri and Cortana, the voice-controlled assistants on smartphones.

We are responsible for what we create

In a recent interview about artificial intelligence and its theological significance, "Science Mike" McHargue explained that while he could not estimate the time frame in which it would develop, its progress was unstoppable. He concluded that we as humans must take some kind of responsibility for what we create, even if it is not "life" but merely an "entity." Provocatively, he asked: "You could even imagine a digital intelligence with consciousness that could not feel pain, suffering, or fear. For example, if you created such an intelligence that is extraordinarily smart, is it ethical to disable or delete it? ... And if it can feel fear and suffering, shouldn't it be given the same rights and protection as humans?"

Machines cannot take responsibility

The question of whether machines can assume any (moral) responsibility is much closer to our present reality. It is relevant, for example, for the self-driving cars currently on the road in Sweden, the USA, and Germany. Until now, the claim was always that they are absolutely safe and that the only source of error in road traffic is the human being. In the meantime, however, the first accidents have occurred. Who bears the responsibility in such a case? The automatic, "intelligent" control system? And according to which ethical standards is it programmed?

It is every driver's nightmare, yet it happens again and again: an unavoidable accident. How should a programmer prepare a self-driving car for it? If no evasive maneuver remains, should the car rather run over the toddler or the pensioner? What sounds like sophistry to some is a point that philosophers such as Oliver Bendel insist must be clarified. May a car sacrifice the life of one person in order to save many? Or should it, in case of doubt, favor the lives of its occupants?
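
To see how concrete this question becomes for a programmer, consider the following sketch. It is purely hypothetical: the function names, the scenario, and the whole idea of ranking outcomes by a count of endangered people are my own illustrative assumptions, not how any real manufacturer works. The point is simply that whatever rule a society settles on would ultimately have to be written down as explicit logic like this.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible evasive maneuver and the people it would endanger."""
    maneuver: str
    endangered: list  # e.g. ["occupant"] or ["pedestrian", "pedestrian"]

def choose_maneuver(outcomes):
    # Hypothetical policy: pick the option that endangers the fewest people.
    # Even this seemingly "neutral" rule is an ethical decision someone had
    # to write down: it treats all lives as interchangeable, and it says
    # nothing about who is to blame when the chosen maneuver still harms someone.
    return min(outcomes, key=lambda o: len(o.endangered))

# An unavoidable accident: every available option harms somebody.
options = [
    Outcome("brake in a straight line", ["pedestrian", "pedestrian"]),
    Outcome("swerve onto the sidewalk", ["occupant"]),
]
print(choose_maneuver(options).maneuver)  # prints: swerve onto the sidewalk
```

Change the one-line policy and the car instead protects its occupants at any cost. That single line is precisely the kind of decision that, as Bendel argues below, cannot simply be delegated to the machine.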

In an interview, Bendel warned against leaving complex decisions to robot cars. "The machine cannot take responsibility" and, in his view, cannot be blamed either. He adds: "If we decide about life and death, there must be someone who bears the responsibility or takes the blame." With this, Bendel brings into the current discussion a thought that is as old as humanity: as early as the creation account, God instructs man to rule over the world, that is, to manage it responsibly (Genesis chapter 1, verse 28). Intelligence obviously needs to be paired with responsibility.

CONCLUSION: Many computer scientists are happy to postpone the question of an ethics for artificial intelligence: "We are not that far yet ..." From my point of view, the discussion on this topic should take place right now.


