The Turing Test and Machine Intelligence (part II)

By Enio...


Last time we discussed the details of the Turing Test, the modern name for an experiment devised by the pioneering computer scientist Alan Turing in the middle of the last century and presented in his academic paper Computing Machinery and Intelligence, which as of 2022 is about 72 years old but whose problems are still with us.

We said that the proposed experiment consists of measuring the success that an artificial entity, be it a machine, a computer or a program, can achieve in a game typical of human beings, where success means demonstrating the same performance as a human player.

But such a game is not just any game, since computers already match and beat us at a multiplicity of games, but a very specific one that Turing called the Imitation Game. Its difficulty for the artificial entity lies in the fact that it must use a natural language and, through it, exhibit an intelligent behavior that humans attribute uniquely and unmistakably to other human beings. Hence the term 'imitation'.
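To make the setup concrete, the game can be sketched as a text-only session in which a judge interrogates two hidden players and must then point at the human. Everything below (the function names, the round structure) is an illustrative assumption of mine, not Turing's formal specification:

```python
import random

def imitation_game(ask, human_reply, machine_reply, guess_human, rounds=3):
    """One session of the game: the judge talks to hidden players 'A' and 'B'
    through text alone, then guesses which label hides the human."""
    # Randomly decide which label conceals the human and which the machine.
    labels = random.sample(["A", "B"], 2)
    assignment = {labels[0]: human_reply, labels[1]: machine_reply}

    transcript = []
    for _ in range(rounds):
        for label, reply in assignment.items():
            question = ask(label, transcript)
            transcript.append((label, question, reply(question)))

    # The machine "passes" this session if the judge points at the wrong player.
    return assignment[guess_human(transcript)] is machine_reply
```

A machine would be said to do well on the test if, over many such sessions, the judge identifies the human no more reliably than when both hidden players are human.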

The idea behind the measurement is to know whether an artificial entity has acquired the quality of thinking, that is, whether it can think. In principle, then, the Turing Test provides a method to investigate whether computers, machines and programs can implement intellectual processes comparable to human thought. This is a topic of great interest and a kind of Holy Grail within the discipline of Artificial Intelligence.

Alan Turing
⬆️ Alan Turing, British mathematician, pioneering computer scientist, cryptographer and philosopher. Photo: Mike McBey (CC BY-SA 3.0).

We also discussed some chatbot programs that have been relatively successful in exhibiting intelligent behaviors that humans ascribe to other humans, and noted that some chatbots, such as Eugene Goostman, have indeed managed to pass the Turing Test, though not in a way that enjoys the consensus acceptance and acclaim of the scientific community. The reason: all these cases exhibit the appearance of thinking, but they are not really artificial entities that can think.

Hence it has been inevitable to ask: Can the Turing Test, then, deal only with appearances rather than authentic intellectual processes? Is this a defect of the test itself? Or is it rather that computers will never be able to think and be as intelligent as humans? If so, are there conclusive obstacles that prevent it?

Among flaws, appearances, objections and obstacles, the truth is that there are reasonable points here, and they deserve answers. In fact, Turing devotes part of his essay to responding to several objections to the ideas behind his experiment, some of which already existed in his time, while others he imagined might become more serious in the future. And he was right: over the years some objections have been revitalized, and new critical remarks on the Turing Test and the thinking-computer project have emerged.

In what follows I will comment on some controversial implications of the Turing Test that are part of the debate he inaugurated within the area of artificial intelligence and even beyond. I will discuss those that I personally find most interesting, offering my own opinions along the way, and I will organize them into three categories: controversies about feasibility, controversies about consequences and controversies about methodology. The following outline will help.

Outline of controversies around the Turing Test
⬆️ Categories of controversial implications that we will address. Magazine cover image: NewScientist

Controversies about feasibility

Grouped here are objections to the real possibility of developing a strong artificial intelligence, that is, an intelligence of non-biological origin as high as that of a human being or even higher, in which case we would also call it superintelligence. We should clarify that by the time Turing addressed the issue, the term artificial intelligence had not yet been coined, so he spoke instead of "thinking machines". When we speak here of thinking computers and strong artificial intelligence, we are basically alluding to the same thing.

Now, is it theoretically valid, physically possible and technologically feasible to create an artificial entity with a level of intelligence equal to or greater than that of a human being? While researchers work to increase our knowledge in this regard, we hear good logical arguments both for and against the idea, but science does not yet have a definitive answer. I think that reaching a positive answer involves overcoming different barriers in the research problem: the theoretical barrier, the physical barrier and the technical barrier.

But let's take a look at the arguments in favor of the negative answer, which are several and interesting.

Unraveling consciousness - a prerequisite?

One of the focal points of controversy in this subject has to do with the capacity of consciousness, that which allows us to have a sense of identity, to recognize what is happening in our environment and to feel our own existence. Many other animals have acquired this capacity, but in humans it stands out considerably.

Do computers have to be aware of their own existence to achieve a high degree of intelligence? This is often claimed. For example, in the first half of the 20th century the British neurologist Geoffrey Jefferson made the following argument, which Turing quotes:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

This is a fairly common opinion even in our days, shared by many people across all levels of education, and it is largely driven by common sense, as it is hard to believe that machines can develop and demonstrate sensitivity and the other qualities a human being possesses. Consciousness is an internal experience that human beings feel to be very special.

Turing bluntly qualifies this position as solipsist, invoking the philosophical belief that conscious beings can only be sure of their own consciousness, never of the consciousness of others, so the argument cannot be used to rule out the consciousness of machines. Turing considers it an extremist position if alleged as an obstacle, and I would add that it is a bit romantic, i.e., argued from feelings. Turing also considers that, instead of adopting such a position, it is better not to hinder research on thinking machines that uses his test as a basis.

However, viewing comments such as Professor Jefferson's as a rhetorical device and putting aside the recrimination of solipsism, the question remains valid: must computers be aware of their existence in order to achieve a high degree of intelligence?

I do not propose to define either concept here, but it is safe to say that the two phenomena are closely related, for we know that physiological damage to the brain can affect both the consciousness and the intellectual capacities of an individual. This also suggests, in principle, that even though both processes rest on a very complicated and exquisite organization of matter, they are still physical phenomena, so they can be studied and perhaps even replicated in the laboratory.

However, this last statement is dangerous, because there is a debate around the nature and origin of consciousness even more arduous than the artificial intelligence debate, where we have physicalist ideas (those that affirm that only the physical is real), some of which are mutually exclusive, and dualist ideas (those that affirm that consciousness transcends the physical). This disjunction basically tells us that science has not yet deciphered the mystery of consciousness and, therefore, we have no answer to the question posed.

Mind-philosophies
⬆️ A diagram with neutral monism compared to Cartesian dualism, physicalism and idealism. A long-standing debate about the mind, although most scientists today are more inclined toward physicalist ideas. Image: Wikimedia Commons

One of the leading scientists currently arguing against the feasibility of conscious computers is theoretical physicist Roger Penrose. He has theorized that consciousness originates at the quantum level within neurons and not from processes between them, a hypothesis that he and Stuart Hameroff call Orchestrated objective reduction. If true, this would add an additional layer of impediments to the thinking computer project as long as there is a connection between consciousness and thought.

Other scholars such as David Wallace, on the other hand, believe that although intuition may lead us to think that consciousness is special and requires special explanations, this need not necessarily be true, just as we do not need a fundamental physical theory to explain digestion or respiration. Consciousness could perfectly well be generated at the level of neural activity, so physicists would not need to solve the problem, but biologists and neurologists would.

In Turing's time, when this debate was not yet so heated, he was inclined, recognizing the paradoxes and difficulties of the problem, to consider it unnecessary to solve the mystery of consciousness in order to know whether computers will be able to think. Today we are not so sure we agree with him, but that does not make us any less optimistic. We will have to wait for more research to understand the connection between the capacity for consciousness and artificial intelligence.

Mathematics has something to say

Another argument I find interesting against the feasibility of thinking computers is the mathematical objection, which Turing had already considered in his time but which has been re-emphasized over the years. What say can mathematics have in this respect? Plenty, for the theory of computation belongs to mathematics, especially discrete mathematics, and is built on formal models such as those Turing himself used in On Computable Numbers, with an Application to the Entscheidungsproblem (1937).

According to the mathematical objection, computational logic has clear limits, and this is well established by Gödel's incompleteness theorems (1931) and Turing's halting problem (1937), which address the questions of the consistency, completeness and decidability of mathematics by a general procedure. These results show that there is a set of logical problems that cannot be solved algorithmically, and the algorithm happens to be the very essence of computation.

There is no simple way to exemplify this, since we are talking about quite formal systems, but the fundamental idea is that when trying to solve a so-called Gödel sentence, an artificial intelligence program could enter a loop whose number of iterations can be huge or infinite, so that the program might never stop or produce an output. In other words, the program that simulates thought could be sabotaged, crashed and given away when confronted with certain kinds of logical problems.
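The flavor of the underlying limit can be suggested with the classic halting-problem diagonalization, sketched here in Python. The `halts` oracle is hypothetical, and the point of the sketch is precisely that no correct implementation of it can exist:

```python
def halts(program):
    # Hypothetical oracle: would return True if program() eventually
    # terminates and False otherwise. The construction below shows no
    # algorithm can implement it correctly for every input, so this
    # stub simply refuses to answer.
    raise NotImplementedError("undecidable in general")

def contrarian():
    # Diagonal construction: do the opposite of whatever the oracle predicts.
    # If halts(contrarian) returned True, this function would loop forever;
    # if it returned False, it would halt at once. Either answer is wrong,
    # so a correct `halts` cannot exist.
    if halts(contrarian):
        while True:
            pass
    return "halted"
```

This is the sense in which a set of perfectly well-posed logical questions falls outside what any algorithm, and hence any conventional computer, can settle.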

This has two implications. On the one hand, there is the possibility of giving the program away during the imitation game: the judge could pose the participants a logic problem on which the machine is supposed to fail. In reality, the program could have a mechanism for declining to solve the logic problem, just as the human participant will most likely do, since such a problem is an extremely formal and technical matter and not every human being has the necessary knowledge and skills to solve it.

However, the second implication is that, in theory, the program will always be unable to beat the person at the logic problem, which implies that it will not have the same intellectual capacity as a human being, revealing at least that computers that simulate thinking will never be able to match us.

Even worse, in more recent years it has been suggested that, unlike human beings, the artificial entity could not even simulate thought, because the mechanics of thought is appreciably different from the way computers operate. In essence, the argument runs as follows:

  1. The ability to think cannot be algorithmic.
  2. The functioning of an intelligent computational entity is necessarily algorithmic.
  3. Humans can think.

This being so, the first logical conclusion is that the artificial entity, by requiring the implementation of an algorithm, would be governed by the principles of computational logic and, therefore, would have insurmountable limitations that would not allow it to reach the level of intelligence necessary to think. In turn, with premise three it can be concluded that human beings can think because the nature of their thinking is not algorithmic and escapes the limitations of computation.
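The deductive skeleton of the argument can be written out formally. The following Lean sketch simply encodes the premises as unproven hypotheses, mirroring the informal argument; it does not, of course, establish that any of them is true:

```lean
-- Toy formalization of the Lucas-Penrose schema.
variable {Entity : Type}
variable (Thinks Algorithmic Machine Human : Entity → Prop)

-- From premises 1 and 2: no computational entity can think.
theorem machines_cannot_think
    (p1 : ∀ e, Thinks e → ¬ Algorithmic e)   -- thinking is not algorithmic
    (p2 : ∀ e, Machine e → Algorithmic e)    -- machines are algorithmic
    : ∀ e, Machine e → ¬ Thinks e :=
  fun e hm ht => p1 e ht (p2 e hm)

-- From premises 1 and 3: human thought escapes the algorithmic.
theorem humans_not_algorithmic
    (p1 : ∀ e, Thinks e → ¬ Algorithmic e)
    (p3 : ∀ e, Human e → Thinks e)           -- humans think
    : ∀ e, Human e → ¬ Algorithmic e :=
  fun e hh => p1 e (p3 e hh)
```

The formal version makes clear where the weight of the argument lies: everything depends on premise 1, which is exactly the claim that has not been demonstrated.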

This reasoning is known as the Lucas-Penrose constraint and, if proven true, would qualify as a theoretical barrier, since it would make it theoretically impossible to build an artificial intelligence based on computational, discrete-state machines.

In the case of Penrose, as already mentioned, he has also pointed out, with his proposal of Orchestrated objective reduction, possible limitations of computation to reach or simulate consciousness.

Libro de Penrose
⬆️ Shadows of the mind: A Search for the Missing Science of Consciousness (1994), one of Roger Penrose's works where he debates the incomputability of consciousness, deepening what he had raised in The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (1989). Image: Wikimedia Commons

The good news for artificial intelligence researchers is that neither orchestrated objective reduction nor the Lucas-Penrose constraint has been fully demonstrated scientifically, so they remain theoretical models and logical arguments. In particular, it is not known whether humans really escape the Lucas-Penrose constraint, as Alan Turing already noted 70 years ago when he responded to the mathematical objection. He said:

Although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.

However, of all the existing objections this may be among the most serious, and Turing acknowledges as much: "But I do not think this view can be dismissed quite so lightly." We will remain in expectation.

What other objections are held against the feasibility of developing strong artificial intelligence? And what about the methodological objections and implications of this research? We will be covering these in a future article.


Some references

  • Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59: 433–460. PDF version here
  • Saygin, A. P., Cicekli, I. and Akman, V. (2000) Turing Test: 50 Years Later. Minds and Machines 10: 463–518. PDF version here
  • Lucas, J. R. (1961) Minds, Machines and Gödel. Philosophy 36: 112–127.
  • Penrose R. (1994) Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press. ISBN 978-0-19-853978-0.

If you are interested in more STEM (Science, Technology, Engineering and Mathematics) topics, check out the STEMSocial community, where you can find more quality content and also make your contributions. You can join the STEMSocial Discord server to participate even more in our community and check out the weekly distilled.



Notes

  • The cover image is by the author and was created with public domain images.
  • Unless otherwise indicated, images are the author's or in the public domain.


22 comments

Well, let's see how it goes with computers in the future! But I agree that for now computers need someone to add some logic to their programs!


Yeah! Although with some revolutionary techniques, they're already able to write their "code". We can't understand it at all, but it works. Thanks for your visit.


Thanks for your contribution to the STEMsocial community. Feel free to join us on discord to get to know the rest of us!

Please consider delegating to the @stemsocial account (85% of the curation rewards are returned).

Thanks for including @stemsocial as a beneficiary, which gives you stronger support. 
 


Only 65? This one was better than the first installment 💔 😅 I'm kidding, LOL. Thank you.


Yeah, the key problem is that we can't have direct access to the consciousness of others. Never mind robots: scientists are debating whether certain animals have consciousness. Do cockroaches have a consciousness? The question enters my mind every time I spray one and see it writhing. Many religious people don't even believe that other mammals, like dogs, have a consciousness!

Since consciousness is an inherently private thing, I doubt it can be detected using the traditional ways we use in science to investigate matter.

The closest I have come to figuring out a real test of consciousness is the following. Suppose scientists slowly shut down parts of my brain. How would I, subjectively, feel? Would I feel that I'm becoming less conscious? Alzheimer's sufferers show that probably I'd have no clue. But what if they then started turning on those parts again? I think I would know that, compared to before, I'm more conscious now. So, similarly, perhaps, let's say we have a robot brain. We could shut down parts of my human brain, and turn on robot parts that we connected to my brain, and then reverse the procedure, and compare how a person felt when he was becoming more and less robotic.

In principle, I see no reason why a future science couldn't get us back and forth through some mental state. You can probably give me drugs and make me super angry, or super calm. Since everything is matter, why can't we slowly make a human brain into an animal one, or a robot one, and then back again? The person would probably know or remember whether he was becoming less or more conscious during the procedure.

You get the point. It's sci fi stuff, but I can't see why, in principle, it couldn't work and, more importantly, I can't see what else could work! I definitely don't think Turing's test works. It wouldn't be much different from male beetles thinking bottles are female beetles.


Since consciousness is an inherently private thing, I doubt it can be detected using the traditional ways we use in science to investigate matter.

Yeah, your take on consciousness is similar to solipsism. On the other hand, while it is true that some organisms, such as cockroaches and amoebas, can have a level of awareness of their environment that can even trigger an apparent "panic attack", I think that is still different from being conscious. I am more inclined to think like David Wallace: consciousness seems mysterious because it is complicated and we have not yet understood it fully, but it is still a physical process and we should not need special theories to explain it.

You get the point. It's sci fi stuff, but I can't see why, in principle, it couldn't work and, more importantly, I can't see what else could work!

Yes, what we don't know is whether that conscious machine can be based on discrete states and algorithms. Turing had already imagined that creating a conscious artificial being of some kind must be possible, since we exist; we are the living example. The problem is what the minimal nature of that machine should be. Our brain is analog and we don't know yet if thinking is computational. Maybe that is the limit, or maybe not.

Thanks for the discussion!


Thanks a lot for this second episode, that I enjoyed as much as the first one. I have a couple of comments/remarks, if I may :)

We should clarify that by the time Turing addressed the issue, the term artificial intelligence had not yet been coined, so he was speaking rather of "thinking machines".

I know I could interrogate the web for it, but it is funnier to ask you about it. When has the term “artificial intelligence” been coined for the first time? Do you know?

This disjunction basically tells us that science has not yet deciphered the mystery of consciousness and, therefore, we have no answer to the question posed.

I was about to comment that this was also a question for the quantum world. Can quantum mechanics explain consciousness? And then you of course pre-answer my comment directly in your blog… You well cornered it… This is kind of an active field of research those days. We can find plenty of references online!


When has the term “artificial intelligence” been coined for the first time? Do you know?

It was coined by professor John McCarthy in 1956, although the term had to wait longer to take off.

I was about to comment that this was also a question for the quantum world. Can quantum mechanics explain consciousness? And then you of course pre-answer my comment directly in your blog… You well cornered it…

LOL. It must be like reading an entertaining story, you know.

Thanks for your visit and comments!


It was coined by professor John McCarthy in 1956, although the term had to wait longer to take off.

Thanks for the clarifications. As always, cool names take some time to take off (if you have time, you may be tempted to check out "penguin diagrams" and how they became world famous).

Cheers!


Have you seen the very recent chat protocols between Google engineers and the "AI" LaMDA? Very disturbing stuff. Most likely LaMDA would easily pass the Turing test (if it would not give away the fact that it is artificial, it is very open about it).
But it shows that the test in itself is flawed. To pass the test, it is sufficient to make others believe that you are self-reflective, but nobody can really look inside a human being, an animal, or an algorithm.
So how, then, to define consciousness?
And another question: if sooner or later a computer program becomes so smart that it has more intellectual capacity than any human (after the "singularity"), how important is it whether its emotions and feelings are "real" or only artificial? I think I don't care; we should still treat that being respectfully. Because likewise my feelings could also be faked and you would never know, right?


That's very interesting. I'll look into that later. I had checked another Google chatbot called Meena that is also very promising.

But it shows that the test in itself is flawed. To pass the test, it is sufficient to make others believe that you are self-reflective, but nobody can really look inside a human being, an animal, or an algorithm.

Yeah, it's true, it's becoming clear that these bots can fool us without showing real understanding or consciousness.

And another question: If sooner or later a computer program will be that smart so that it has more intellectual capacity than any human (after the "singularity"), how important it is at all if it´s emotions and feelings are "real" or only artificial?

Well, I guess we'll have to invent ethics for it. The discussion has already started. I have heard of some researchers who are a bit in love with robots and take them seriously. To them, consciousness is just an appearance, so any entity that displays it should be entitled to proper treatment. I wonder if Sam Harris's theories of an objective morality might be useful at a certain point. From that point of view, we can tell good things from bad things based on the good we do or the harm we inflict on conscious beings.


Indeed, we have to find a way of collaboration. Although some take it to the extreme and are going to worship a true AI like a god. There was even a church founded for this, but it was closed recently.


LOL. It should be rather a parody, like Pastafarianism or when Richard Stallman dresses up as a preacher.
