Humans Have a Lock on Wisdom?
I’m of an age that I grew up without computers. The typewriter was the only advanced technology I knew until calculators entered the scene while I was in high school. I took a personal typing course when I entered high school as I figured at some point I would want to know how to use a typewriter.
We had one at home, an old Underwood. I now know it’s an Underwood 5, produced between 1900 and 1920. Five million of them were manufactured, and many are still usable because they were so sturdy and durable. In fact, the one we owned is currently in my home, although I’ve not touched it in many years.
When I was taking the personal typing course, I used the Underwood at home to practice and then later to write papers to be turned in. I often tried to imagine what it would be like if a typewriter could save my work for me to edit it without having to retype pages over and over again.
Little did I know, within my lifetime a ‘typewriter’ just like that, and more, would not only be invented, but those advanced abilities would end up in people’s pockets, built into their phones. I never considered being able to carry a phone in my pocket. How life shifts.
Indeed it does shift. I’ve gone from typing and retyping content on a typewriter to having ‘conversations’ with AI on my current version of a typewriter, aka a computer, while connected to the internet. A connection I wouldn’t want to be without.
This morning my Prompt A Day email had the word ‘wisdom’ as the word for the day. I asked ChatGPT for some writing ideas around the topic. Among the ideas was one about why AI couldn’t have wisdom. Now, that one caught my attention.
What is Wisdom?
Wisdom comes from the ability to use knowledge and experience to make good decisions and judgements. Knowledge is a collection of facts; a wise person is able to connect those facts with their life experiences and apply the knowledge in ways that benefit themselves and others.
Life experiences themselves are not enough to create wisdom. Wisdom comes from how we reflect on events, experiences, our observations of others and the feelings we’ve had, and relate them to our place in the world and their implications for future events.
Wisdom and Philosophy
Wisdom sounds a lot like philosophy in action, I thought. I went back to my AI consultant and asked about the difference between wisdom and philosophy. I’m tempted to call it “HAL” but, well, we know what that conjures up. I’ll refrain.
There are some similarities between the two in that they are both concerned with understanding the world, life and the human condition. The approach is different.
Wisdom involves insight, an understanding of life’s complexity and the judgement to discern truths. Wisdom is experiential, often received from previous generations and merged with personal experiences, reflections and lessons learned to develop a sense of what action to take and why.
Philosophy, on the other hand, is an academic discipline centered on critical thinking, argumentation and systematic reasoning to explore questions about life. The process seeks to explore what underlies our beliefs and perceptions about the world.
Philosophy can lead to wisdom, but it is not wisdom in its own right.
Can AI Have Wisdom?
Now I was curious: does AI consider itself capable of wisdom? I asked my consultant, of course. It was pretty unequivocal that, no, AI cannot possess wisdom. I tried several lines of questioning to see if it would consider itself capable of wisdom. It was a hard no on each avenue.
ChatGPT informed me that wisdom involves the application of knowledge in a practical, meaningful and context-specific way, something AI has limited capacity for since it doesn’t have personal experiences or emotions. That absence makes it impossible for AI to fully understand the human condition in the way humans are able to.
It advised me that while AI can help humans make decisions by providing information, the wisdom in those decisions rests with humans, because decisions involve taking into account ethical considerations, potential consequences and personal values that AI can’t comprehend.
Wait, I thought, the AI is trained on such a vast amount of data that surely it can take in and evaluate ethical considerations, potential consequences and personal values. I asked it about that. ChatGPT acknowledged it can provide information about different ethical considerations, potential consequences and personal values and how they might impact decision making.
However, it pointed out some key limitations of AI:
- lack of personal experiences
- limited contextual understanding
- difficulty with ethical complexity
- difficulty predicting consequences
Nuance, emotions and personal experience are the main barriers keeping AI from having wisdom. AI can be a tool to aid human wisdom, but it can’t possess it.
I asked ChatGPT if AI could be capable of decision making as portrayed by HAL in “2001: A Space Odyssey”. It gave me an overview of AI as presented in the movie and arrived at this conclusion:
AI systems today are tools that can perform specific tasks, often very complex, but always within the parameters set by their human creators. They don't have minds of their own, and they can't decide to go "rogue." They only do what they've been programmed to do. So while "2001: A Space Odyssey" is a great piece of science fiction, the portrayal of AI in the film is far from the current reality.
So, for now anyways, humans have a lock on possessing wisdom.
Shadowspub writes on a variety of subjects as she pursues her passion for learning. She also writes on other platforms and enjoys creating books you use, like journals, notebooks, coloring books, etc. Her Nicheless Narrative podcast airs on Thursdays each week.
NOTE: unless otherwise stated, all images are the author’s.
Some of the image work may have been done in Midjourney, for which I hold a licence to use the images commercially.
How to Connect With ShadowsPub:
Twitter: @shadowspub
Instagram: shadowspublishing
Medium: @shadowspublishing
Publishing Website: Shadows Publishing
Nicheless Website: Nicheless & Loving It
(Podcast & subscriptions for: Prompt A Day, PYPT Reminder & Newsletter)
Pimp Your Post Thursday (PYPT):
join us on the DreemPort Discord, 12pm EST Thursdays
Get eyes on your content and meet new friends. Join DreemPort.
It's been decades since I've read or watched 2001: A Space Odyssey, so I might be wrong. But my recollection is that HAL was merely following his initial programming when he began killing the ship's crew. He had been programmed to hide the ship's true mission from its crew. That is what led to HAL being forced to 'decide' which was more important, the mission or the crew. Since HAL's original programmers had instructed him to hide the true mission from the crew, that pretty much provided the answer to HAL's dilemma: the mission superseded the needs of the crew. Any threat to the mission must be programmatically dealt with.
HAL's "I'm sorry, Dave. I'm afraid I can't do that," provides 'proof' that HAL was not sentient, in the human sense. If HAL had said "I'm sorry, Dave. I won't do that," then we could conclude that HAL was acting of his own volition. The fact that he said he "can't" do it solidifies the reality of his machine-ness.
As I stated a while back on Twitter (1, 2, 3, 4):
Here is a 'chat' I just had with HAL-9000 (courtesy of GPT-4):
So, although HAL was purely 'fictional' and thus could have been imbued with Artificial General Intelligence (AGI) by his creator (Arthur C. Clarke), Clarke did not give him that intelligence. Perhaps Clarke was trying to show us that we need not fear AGI, because improper or inchoate programming alone is sufficient to produce catastrophic outcomes.
This echoes what I heard Peter Thiel say at a conference in 2021. His basic premise was that we need not fear AGI, because the path to AGI will enable a level of totalitarianism the world has never known before, well in advance of any genuine threats from AGI.
In other words, we need not fear AGI; we need to fear humans empowered with the predecessors of AGI.
Here's one of Peter's quotes from that talk:
You seem to have gone to a lot of effort about HAL, which was merely the end of a post about whether AI can have wisdom.
Some watched “2001: A Space Odyssey” and saw a computer making decisions. Others watched it and saw it as simply following its programming. Most people I've seen talk about it saw the former, which is one reason why the fiction was seen as so remarkable. Personally, I didn't think it was that great a movie, but to each their own.