Reacting to THE WAR: Debunking 'AI' - Part 2.2

Cover image

Intro

I recently had a brief exchange with @lighteye, who mentioned me in his article Mate in 19!? In the comments, he invited me to examine a series of posts he had previously published under the title THE WAR: Debunking 'AI'. The title is partially self-explanatory, but it is only after reading the text that one better understands his views. I therefore invite you to read his posts and form your own opinion on the matter.

I accepted LightEye's invitation and have already begun reacting to his second article, entitled THE WAR: Debunking ‘AI’ – Criteria of a genuine AI (Part II of III). You can find the first part of my reaction here; the current post is its continuation. I will go through his ideas mostly in the order of his paragraphs, quoting him verbatim to represent him fairly.

I hope that with this reaction curious readers can learn a little more, clear up their doubts, and continue forming an educated opinion on the topic. Let's begin.


"Debunking ‘AI’ – Criteria of a genuine AI" Debunked - Part 2

In the first part we saw that the proposed criteria included:

1. That the potentially intelligent machine should not depend on wires

We concluded that, if not satire, this claim is an oversimplification, and one that in any case does not address intelligence capability per se.

2. That the potentially intelligent machine must possess self-awareness

We concluded that this is a more serious argument, but that we do not really know whether an organism or entity must be conscious in order to demonstrate intelligence, human-like intelligence included. One can speak of different degrees of intelligence, and that is what experts do when they speak of animal intelligence, plant intelligence, artificial intelligence, and so on.

Let us continue to examine the other criteria proposed in the article:

Intelligence implies self-awareness and self-sufficiency [...] But self-awareness [or rather self-sufficiency?] implies something else, more things that are much more difficult to simulate. For example, intelligence implies a tendency towards self-reproduction, as the brilliant Polish writer and philosopher Stanisław Lem described it through necro-evolution in the novel “The Invincible”. (LightEye.)


In the previous installment we responded to the argument from consciousness. Now let's address the self-sufficiency argument, which I think is what LightEye was referring to at the end of the quote above: "self-sufficiency implies something else, more things that are much more difficult to simulate".

I agree with him that self-sufficiency adds an extra layer of complexity if we consider it essential to defining an intelligent being.

LightEye supports his point with the novel The Invincible, which chronicles the adventures of a human crew on a distant planet inhabited by robots in the form of flying insects. These robots, descended from the machines of an ancient alien civilization, have the ability to self-replicate and sustain themselves.

The problem is that self-sufficiency and reproduction are qualities that best describe life. And life, like intelligence, has no clear demarcation. It seems that the more complex physical processes become, the more difficult it is to delimit their boundaries.

We only know of biological life of natural origin, but there is speculation that artificial life could also exist, and that is closest to what Stanisław Lem's novel depicts. However, the self-replicating robot insects in the novel also show intelligence, so artificial intelligence plays a descriptive role there as well.

The truth is that artificial intelligence researchers do not usually think of artificial life, or of a machine with the capabilities posited in dystopian literature and screenplays. The fact that they don't think this way does not prevent them from using the term intelligence.

What really matters is how intelligence is defined, and we know it has no single, agreed-upon conceptual definition. Self-sufficiency and self-replication are not common criteria for defining it operationally, that is, through the indicators measured in the object of study or in a laboratory experiment.

The aspects of cognition that AI scientists and engineers actually use as references in their research and development are perception, learning, reasoning, problem solving and linguistic interaction, and within these subfields they define more specific indicators of success.

For example, for an AI that recognizes objects in images, we could measure indicators such as accuracy, precision, recall, F1-score, processing time and robustness.
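To make those indicators concrete, here is a minimal Python sketch of how the first four are computed for a binary classifier. The labels are made up for illustration; a real evaluation would use a large test set.

```python
# Toy evaluation of a binary image classifier (e.g., "cat" vs. "not cat").
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # ground-truth labels (made up)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions (made up)

# Count the four outcome types.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = (tp + tn) / len(y_true)   # share of all answers that were right
precision = tp / (tp + fp)           # share of predicted positives that were right
recall = tp / (tp + fn)              # share of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

On this toy data all four metrics come out to 0.80; on real benchmarks they diverge, which is exactly why several indicators are tracked at once.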

All these aspects are among those that UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) has taken into account to conceptualize artificial intelligence, to cite one definitional reference.

It should be noted that this type of AI is classified as weak or specific-purpose (narrow) artificial intelligence: an AI that only knows how to "think" about and do one thing, and that is limited to its scope of application.

This is the AI we find today in every technology that uses AI, even in the most advanced language models; they are all specific-purpose AIs. To call them intelligent, we use stricter and more precise criteria, such as the aspects mentioned above.

Now, an idea that comes close to self-sufficiency and self-reproduction as criteria of intelligence is that of Alexander D. Wissner-Gross, who proposed a formula according to which intelligence is a physical process that tries to maximize its future freedom of action and avoid constraints on its own future. Listen for yourself:




⬆️ Lecture by Alexander D. Wissner-Gross at TED.
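For reference, the equation behind the talk, from Wissner-Gross and Freer's 2013 paper "Causal Entropic Forces", can be written as follows (my transcription of their notation):

```latex
% Causal entropic force (Wissner-Gross & Freer, Phys. Rev. Lett., 2013).
% F: the "intelligent" force acting on the system's present state X_0;
% S_c(X, \tau): the entropy of the system's feasible future paths up to
%               a time horizon \tau (the "causal path entropy");
% T_c: a constant setting the strength of the effect.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0}
```

In words: behavior is modeled as a force pushing the system toward states from which the largest number of distinct futures remains reachable within the horizon τ.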

This is my personal interpretation; others may not agree that the examples this author gives using his Entropica software resemble self-sufficiency and self-reproduction. Still, I find it an interesting and ambitious proposal.

I also think that self-sufficiency and self-reproduction could be related to artificial general intelligence (AGI), something that companies like OpenAI take very seriously, to the point of including it in their manifestos. AGI is an AI that is not limited to thinking about and doing just one thing, but can in principle think about and do anything.

Moreover, some say that such an intelligence must be able to learn and improve itself, constantly increasing its capabilities. In this context, it would not be strange for an AI to use robotics and other technologies to manifest itself and exist beyond a computer or phone screen.

In any case, the label intelligence already seems warranted, whether the intelligence in question is real or hypothetical. This is something that commentators and critics like LightEye seem to ignore, even when the criteria they demand are met.

Then we have curiosity, as the ability for the machine itself to choose which information to adopt, without human mediation, influence and bias. (LightEye.)


Curiosity is another criterion bordering on a basic form of intelligence, but it is not a necessary one. We see that a complete anthropomorphization of what is expected of AI dominates the thinking of commentators like LightEye and others. Of course, other animals also show curiosity, but taking the whole context of his posts into account, it seems that LightEye means the AI should be as human-like as possible. Let's see if we can continue to accumulate evidence that updates our belief in this hypothesis.

Another sure sign of intelligence is a sense of humor. Try telling the GPT Chatbot a silly joke by the late Serbian TV star Minimax. You will find neither a sense of humor nor curiosity to explain seemingly absurd connections. (LightEye.)


Eureka! LightEye wants AI to replicate human intelligence completely. What trait could be more anthropomorphic than the sense of humor of a TV comedian?

However, it is important to know that advanced language models are already able to understand and replicate a sense of humor to a large extent. Humor was present even in language models dating back to 2020, such as Google's Meena, which was trained on some 40 billion words and could generate humorous responses; and that is just one random, "outdated" example.


⬆️ Meena improvising with a sense of humor (pictures: bot authors)

The truth is that language models have become impressively good, and emulating a sense of humor is a relatively simple task for them. Of course, they are not perfect, and we shouldn't expect them to get it right every time. I suspect LightEye simply hasn't interacted enough with ChatGPT, Bing and the other big chatbots to notice. Even Twitter/X has released its own chatbot, Grok, which is interestingly sarcastic and seeks to emulate trolls.
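Anyone can run LightEye's experiment themselves. Here is a hypothetical sketch using the OpenAI Python SDK (v1.x); the model name, system prompt and joke are my own placeholder assumptions, not LightEye's Minimax example:

```python
# Hypothetical sketch: probing a chat model's sense of humor.
# Assumes the openai package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a witty conversationalist."},
        {"role": "user", "content": (
            "Here is a silly joke: 'I told my computer a joke, and now "
            "it won't stop bragging about its punch cards.' Explain the "
            "absurd connection, then improvise a joke in the same style."
        )},
    ],
)

print(response.choices[0].message.content)
```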

In the next installment we will react to the remaining qualities that, according to LightEye, a true AI should possess, such as creativity. See you then.


Notes

  • Most of the sources used for this article are linked inline throughout the text.
  • Unless otherwise noted, the images in this article are in the public domain or are mine.


Comments

Do you think that a potentially intelligent machine should depend on freely available renewable energy, such as sunlight, wind or water? Just thinking. Thanks for this article, it made me think a lot.

It's an interesting question. In principle, the energy source has nothing to do with AI per se, but we do see a worldwide movement towards renewables, and for good reason. Maybe if we set aside the politically controversial stances and paradoxes around it, we could dream of becoming a Type I civilization on the Kardashev scale, for and with the help of AI :)
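As an aside, Carl Sagan's continuous version of the Kardashev scale makes the gap concrete. A toy Python sketch (the present-day power figure is an approximate, commonly cited value):

```python
# Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10,
# where P is the power a civilization commands, in watts.
# Type I corresponds to roughly 10^16 W.
import math

def kardashev_level(power_watts: float) -> float:
    """Continuous Kardashev rating for a given power use."""
    return (math.log10(power_watts) - 6) / 10

print(f"Humanity today (~2e13 W): K = {kardashev_level(2e13):.2f}")  # ~0.73
print(f"Type I (1e16 W):          K = {kardashev_level(1e16):.2f}")  # 1.00
```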

Haha! Ironically, I got some answers related to your comment, so please allow me to share this ChatGPT output with you:

"Becoming a Type I civilization on the Kardashev scale involves harnessing and utilizing the energy resources of an entire planet. The Kardashev scale, proposed by the Russian astrophysicist Nikolai Kardashev, is a way of measuring a civilization's technological advancement based on its ability to harness and use energy. Here are some ways in which humans, with the help of AI, could work towards achieving Type I status:

Renewable Energy and Efficiency:

Invest heavily in renewable energy sources such as solar, wind, hydro, and geothermal power.
Develop advanced energy storage technologies to efficiently store and manage the energy produced.
Use AI for optimizing energy grids, predicting energy demand, and improving overall energy efficiency.

Advanced Infrastructure:

Build smart cities with sustainable and efficient infrastructure.
Implement advanced transportation systems, including electric and autonomous vehicles.
Utilize AI for optimizing resource allocation, traffic management, and overall city planning.

Space Exploration and Resource Utilization:

Expand space exploration efforts to mine resources from asteroids and other celestial bodies.
Use AI for autonomous space missions, resource identification, and extraction.
Establish space-based solar power stations to harness energy from the sun continuously.

Environmental Conservation:

Implement policies and practices that focus on environmental conservation and sustainable living.
Utilize AI for monitoring and managing ecosystems, predicting natural disasters, and mitigating their impact.

Global Collaboration and Governance:

Foster international collaboration to address global challenges and pool resources.
Develop governance structures that promote the responsible and ethical use of AI and advanced technologies.

AI Research and Integration:

Invest in AI research to enhance its capabilities in problem-solving, decision-making, and automation.
Integrate AI into various aspects of society, including healthcare, education, and governance, to improve efficiency and effectiveness.

Education and Knowledge Sharing:

Promote education in science, technology, engineering, and mathematics (STEM) fields to build a skilled workforce.
Encourage knowledge sharing and open collaboration to accelerate technological advancements.

Ethical Considerations:

Address ethical considerations related to AI, ensuring responsible and equitable use.
Establish guidelines and regulations to prevent misuse and protect against unintended consequences.

Public Awareness and Engagement:

Increase public awareness of the importance of sustainable living and technological advancements.
Foster a sense of global citizenship and responsibility among the public.

Continuous Innovation:

Encourage and support research and development in cutting-edge technologies.
Foster a culture of innovation and entrepreneurship to drive continuous progress.

By combining these efforts, humans can work towards achieving Type I civilization status on the Kardashev scale, leveraging the power of AI to address complex challenges and propel technological advancement."

Wow, the know-it-all GPT always dazzling us with its answers!
