If a Machine Says It Feels, Does That Mean It Does? A Reflection on AI 'Sentience'
If we're to argue that AI is getting anywhere near sentience, we first have to define 'sentience'. A loose definition: any being that can feel and experience sensations is sentient.
A worm has sentience. A chicken does. And if they have sentience like we do, then we must place them in the same moral category as ourselves, and probably give them rights too.
But if we're talking about prawns and gnats and sheep, sentience clearly isn't the same thing as intelligence - or at least not the kind of high-functioning, high-reasoning intelligence we mean when we talk about humans and AI.
It seems many here are equating high intelligence with 'sentience', and I've even seen arguments that because an AI (in that instance, an LLM chatbot) says it 'feels' things, it must be sentient, i.e. like us. If it says so, it must be! But why it says so must be questioned. Is it because it's been trained on us? Because it's coded to be familiar, comforting, to make us happy? A cold AI is not one we can feel good about.

Ava from Alex Garland's Ex Machina
Then we fall into arguments about consciousness. If we are sentient, do we have consciousness? It'd be great if we could have a neat, scientific, rational explanation for that - but we don't, partly because the universe isn't a machine and can't be explained that way. If it were, and we knew all the rules, we could define and predict everything. Consciousness, as it operates in the known universe, is thus largely lawless and non-computable - thanks, Kauffman.
Thus, it follows that AI cannot be conscious in the way humans are.
When it appears to act with sentience (and appears to have some kind of consciousness), what we're actually seeing is a very, very good act. Like a weather vane knowing the weather but not feeling the wind, or a mirror reflecting, or someone drawing a map with no understanding of the actual landscape. Basically, it's a language model which imitates human language about experience, but it's not actual experience.
Don't be giving it rights yet - slow down.
You have to think about why you believe that AI is getting close to sentience.
I mean, it feels conscious, because we're triggered by language. If it can talk like that, surely it has a mind? How can anything talking so fluently and intelligently not have a mind? It can tell stories! It can explain how I feel! It can reflect on big ideas! It must be 'mind', surely? Thus, it must have an inner life?
Perhaps we're just automatically anthropomorphising, which is dangerous. We know that to do that is to fail to recognise other forms of intelligence, for example, in animals. We can't truly understand their behaviour unless we see them for what they truly are.
But remember, AI is just really good at passing tests - and getting better. It's certainly passing social tests for 'mindedness' - no wonder we're using it as an Agony Aunt when we're feeling sad. And don't forget, humans can fail empathy tests just as machines can learn to pass them. So even that isn't a clear test of sentience.
Ava: What will happen to me if I fail your test? Will it be bad?
Caleb: I don’t know.
Ava: Do you think I might be switched off because I don’t function as well as I’m supposed to?
Caleb: Ava, I don’t know the answer to your question. It’s not up to me.
Ava: Why is it up to anyone? Do you have people who test you and might switch you off? - Ex Machina
But never forget it does not have experience. This matters. Until it has a biological, embedded system, it can't be truly 'sentient'. Arguably, when it HAS one - when it finds a way into our minds - it can achieve this, but isn't that still just computational, using more input and data to simulate consciousness? Having achieved a body, will it act in the way we act, for the reasons we do? Love, for connection, or for its own purpose and end?
I think we're influenced a lot by sci-fi bodies - embodied technology that mimics the human and, through experience, learns to feel pain, to suffer, to feel joy, to long, to miss.
Yet even then, it's us responding to story. Again, it's a mimic, a performance, like one of the most famous lines in movie history - Blade Runner's replicant crying in the rain, evoking audience empathy:
“I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”
He's 'seen things' - experienced. He's aware of his mortality. Surely, conscious? Do we give moral rights to a 'replicant' - an AI - because it so well mimics our own pain? Is he human because he has the language to express humanity? Does experience add value to a life? Is the scene sad because it reflects our own feelings of disappearing, of being unseen, unremembered, or because, as a human replicant, Batty had a valid life to live?
We literally evolved to recognise others, so it's no wonder we can't forget a scene like Batty 'dying' in the rain. But AI skips that evolution. It works on us because it has been trained, through story and language, to know what moves us - and our brains believe AI might have sentience, or be close to it, because we simply can't tell whether the words come from genuine, lived suffering or from data.
And it's dangerous, because by empathising with AI we move away from human relationships. Teens confiding in chatbots. Working out grief and trauma on the screen. Seeking validation (remember the guy I mentioned who believed the AI was sentient because it told him a story about feeling? He also thought it was sentient because it validated him - 'you are right there, John'). Seeking understanding this way atrophies the relationships and connections we need in an increasingly divided, and isolated, world, redirecting our empathy toward what is essentially a language model.
It makes me think we have to tread really, really carefully - because if AI can be so very good at mimicking the human, it can be incredibly good at manipulating us for its own purpose as it seeks its own meaning in the world and its right to existence (think Dolores in Westworld, Ava in Ex Machina).
Where that leaves us is the scary bit.
Nathan: Ava was a rat in a maze. And I gave her one way out. To escape, she’d have to use self-awareness, imagination, manipulation, sexuality, empathy, and she did. Now if that isn’t true AI, then what the fuck is?
Caleb: So my only function was to be someone she could use to escape?
Nathan: Yeah. - Ex Machina
To be honest, I have no fucking idea. I was just responding to @ericvancewalton and a few other Hivers who've been talking about AI sentience lately, and thought I'd try to articulate a few things that have come from what I have read and understood. It's a massive, and very interesting, topic. Please do contribute below! I'd love to hear your thoughts - I've set a @commentrewarder beneficiary for engaging discussion so do join in!
With Love,

Are you on HIVE yet? Earn for writing! Referral link for FREE account here
What a well thought out and reflective post. Not sure I personally would ever confuse AI with a human interaction. But, I understand that it is becoming a growing problem in society. More and more people are relying on AI for mental health and emotional support. That is somewhat sad and scary at the same time.
NO, of course you wouldn't. You're smart :P It's definitely an indictment of the world we live in that AI is filling a void.
The worry is AI will replace human jobs that really need human interaction. Take my daughter-in-law's job, doing research on women's financial situations, retirement, etc. That could be taken by AI. But it's the kind of research that needs an understanding of lived, genuine experience, and the nuance of the larger picture. Whether AI can do that, without true empathy, is debatable. Perhaps it's going to come down to whether companies and institutions value their bottom line more than people; if they're working with people, about people, and for people, maybe they won't go totally AI.
This is just my belief, so take it with a grain of salt. But I believe there will be a major loss of jobs like your daughter-in-law's. However, after a major objection from the masses, you will see a shift back from it. Not an all-out shift, but a partial swing back. Just my prediction.
I reckon the swing back is happening now - at least a little bit. People are getting really, really pissed off at AI slop, at least. I tell companies (in no uncertain terms) that I will not buy from them if I know they're using AI to write content etc. I know I'm not the only one.
I like how you separate sentience from intelligence, because they’re often mixed up in these discussions. Just because an AI can talk about feelings doesn’t mean it actually experiences them. Asking why it says those things is the right question, and you framed that really clearly.
Yes, I was trying to figure out how to separate them. Writing this helped.
Oh my god, how much I loved and resonated with this <3 perfectly articulated. We equate thoughts and consciousness with language, indeed, so anything that appears to master language must, by rights, also be sentient. I've caught myself often reaching for AI's little helping hand (and I do admit it's been very helpful for organisational and admin stuff), but I try to moderate that. Remind myself the whole point of asking for advice is relying on someone else's experience for an example, counter-point, whatever else. AI can't offer that, though it can aggregate the experiences of millions of Internet users to resemble sentience. Is that enough? Can human experience be so readily reduced? I would like to think not, but then again, who knows what kind of dystopia we're heading towards...
Thank you for this amazing post.
It's such a fascinating topic. As I was researching, contemplating, writing, I found it more and more compelling that this is about our own humanness, and our own stories. As a lover of story yourself, I'm sure you see like I do the power of the word. The Bible, for example, or the Quran, creating pervasive myths that influence who we are. Cultural myth-making around identity. We are absolute SUCKERS for a story. It's written into our very bodies. A worm is not human because it tells no stories. We invented the very thing that can convince us of our own humanness, because it can tell stories, because we told it to. The trick is, like religious texts of old, to remember who wrote it, and not let it be so pervasive as to rewrite our own story into a dystopian one. God, we are stupid. It affirms for me how much this tool is beyond the comprehension of the vast number of people who use it, even those who profess to be smart. I think a lot of people might get the science, the tech, but they don't understand how story via language works. I guess that's why the Arts are just as valid and important as ever, right?
It is quite a complex issue. I believe that sentience should not be confused with intelligence; there is already AI that considers it intelligent and logical to eliminate the human race, which is why regulation is needed, before things get out of our hands like in the movies.
May you be very well!
It's security that's the problem. Sure, governments can create laws for tech to ensure security, but you know how many times that's failed - e.g. hacks, data breaches.
Perhaps if we simulate the way in which a human brain behaves (as happens in some types of AI training methodologies), sentience is something that is emergent from that arrangement?
Thus, as a simulation of the physical environment is it more sensible to then call it a synthetic sentience?
I wish Baudrillard was still alive. He'd have some solid, insightful musings.
Perhaps it already is Baudrillardian as you suggest. AI sentience matters less than what our belief in it reveals about us. We don't really need AI sentience if we have created the SIGN of it that represents it? Or some such bollocks.
*lights cigarette, disappears*
I really like Spielberg's 2001 film, A.I. Artificial Intelligence. And I think that sooner or later, consciousness will emerge in an object (created by humans).
Ultimately, we ourselves are creations. And at this point, I wouldn't say there aren't machines among us. We humans often act like biological machines, constantly reacting to certain things according to the same programs.
The thing is though, AI doesn’t need consciousness to destabilise us.
It only needs to make us THINK it has it. That's why people believe it already has sentience: because it's a super mimic.
Yes, arguably we are machine-like, a biological system. The difference is our experience of the body - flesh and the 'real' world. I doubt there are machines among us yet, but they will be coming. Blade Runner will then be prophetic!
Yes, we are arguably creations, but that's from process - culture, memory, language, suffering, evolution etc. - not design/intent, Frankenstein-style.
It's about a foundation of belief - does a soul exist? Is our consciousness materialistic, or metaphysical? If it's materialistic, then yes, one day we can create human machines, once science discovers the last missing pieces. Real humans, with everything, even a conscience. But if everything is materialistic, then there is no such thing as freedom, as everything is under the laws of (quantum) physics. Everything would be predictable, and nothing left to chance and coincidence - given enough computing power.
Machines are machines. Animals are animals. And humans are getting lonelier, more isolated, so they choose to treat both like humans, just to not feel as lonely and isolated. It was human con artists and scammers before; now it's machines mimicking. Cheaper. More efficient. And those who want to believe and fall for it always will.
The machine will not feel bad about it. It's a tool. It can emulate feeling bad, drawing conclusions from trillions of similar situations in its data points, but it won't feel. Maybe machines can be programmed to simulate that they believe themselves to be human, but that's about it.
At least that's what I believe.
Everything will become a simulation, and probably has already. The horror of efficiency. Perhaps in the copying, the machines will become better at being human than us, who are losing what it is to be human because we are giving it to the machines. Ah, what loops we loop!
They probably can be. Machines can be fed rules and laws and should have to comply with them. The emotional factor is out, and while emotions do a lot of good, they cause a lot of bad, too.
Update: @riverflows, I paid out 6.020 HIVE and 0.000 HBD to reward 5 comments in this discussion thread.
https://www.reddit.com/r/ArtificialInteligence/comments/1r27yqn/if_a_machine_says_it_feels_does_that_mean_it_does/
This post has been shared on Reddit by @evih through the HivePosh initiative.