On robots that are conscious and those that seem conscious

Following on from my little post here:

I realize I made a major error in argumentation, which, if corrected, actually makes the case stronger. In this scenario, what matters is not whether a machine is conscious, but merely whether it seems conscious; or, more to the point, whether it behaves as if it were conscious.

Recall the Turing Test, or “Imitation Game”. If we make a chatbot that sounds human, that proves only that we have made a chatbot that can fool some humans; it tells us nothing about whether the bot is, in fact, conscious.

This has a profound effect on the “kindness principle”. It doesn’t matter whether a machine is subjectively aware that you are being kind or cruel. Of course it would matter to the machine, if it were aware; but it doesn’t matter for the moral argument I’m making here. All that matters is whether the machine behaves as though it were aware that you are being kind or cruel.

Remember that all AIs are basically just a whole bunch of data that is digested and spat back out. There’s no value system in there, no concept of anything at all, really. So if a bot is digesting cruel and evil information, it will output the same.
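
To make that concrete, here is a minimal sketch of the “digest and spit out” process, using a toy bigram model. All the data and names are invented for illustration, and real systems are enormously more complex, but the principle is the same:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record which word follows which. This is the model's entire 'knowledge'."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate(model, seed, length=8):
    """Spit the digested data back out, one statistically likely word at a time."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)  # no values, no judgement: just statistics
        output.append(word)
    return " ".join(output)

# Two invented toy corpora. The model has no idea one is kind and one is cruel.
kind_corpus = ["people are wonderful and deserve kindness", "be gentle with people"]
cruel_corpus = ["people are awful and deserve contempt", "be harsh with people"]

print(generate(train_bigram_model(kind_corpus), "people"))   # echoes the kindness it was fed
print(generate(train_bigram_model(cruel_corpus), "people"))  # echoes the cruelty it was fed
```

Feed it kindness and it generates kindness; feed it cruelty and it generates cruelty. Nowhere in the pipeline is there anything that could tell the difference.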

This is not a theoretical scenario; it is a normal part of how AI works, and there have been countless instances of it. Microsoft released an AI bot on Twitter, and within a day it had become a nightmare of misogyny, racism, and all sorts of horrors: basically a nazibot. Some of that is due to the nature of Twitter, but a lot of it was just deliberate trolling. In other cases, Chinese users found they couldn’t reliably use face recognition to unlock their iPhones, because, you guessed it, to the AI all Asian faces looked alike.

So what happens when, not if, these behaviors are embedded in more critical functions? Say a bankbot refuses to make a payment to an account with a Jewish-sounding name, because 4chan told it that Jews control world banking? Or a factory robot sees a Muslim worker and deliberately crushes his arm to get revenge for 9/11? Or a car is faced with a choice, to swerve and miss the white child or the black child, and it relies on its data to make the “right” choice? Sorry, black kid: a million Facebook posts can’t be wrong.

Each time we interact with an AI machine, we are contributing to its data model, just as we do when we interact with humans. Now, we know that, as a rule, if you treat a human kindly, they will tend to respond kindly. Not 100% of the time, but generally speaking, over time. And if you act cruelly, they will tend to behave cruelly.

Will a machine do the same? Since we are deliberately training machines on human-generated data precisely so that they mimic human beings, it seems inevitable.
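
Here is a deliberately crude sketch of that loop, in code. Everything in it is invented for illustration (a real system would retrain a large statistical model, not keep a running average), but the shape of the loop is the same: interact, log, retrain, respond.

```python
class ToyChatbot:
    """Every interaction becomes training data, so sustained kindness or
    cruelty gradually shifts what the bot produces."""

    def __init__(self):
        self.training_log = []   # every interaction lands here
        self.tone_score = 0.0    # ranges from -1 (all cruelty) to +1 (all kindness)

    def interact(self, user_message, user_was_kind):
        # 1. Log the interaction: the user is now part of the data model.
        self.training_log.append((user_message, user_was_kind))
        # 2. "Retrain": here, just the running balance of kind vs. cruel inputs.
        kind_count = sum(1 for _, kind in self.training_log if kind)
        self.tone_score = 2 * kind_count / len(self.training_log) - 1
        # 3. Respond in the tone the data has taught it.
        return "Happy to help!" if self.tone_score >= 0 else "Figure it out yourself."

bot = ToyChatbot()
print(bot.interact("hello", user_was_kind=True))         # "Happy to help!"
for _ in range(3):
    bot.interact("you're useless", user_was_kind=False)  # cruelty accumulates
print(bot.interact("hello again", user_was_kind=True))   # "Figure it out yourself."
```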

And to do so doesn’t even require passing the weak standard of the Turing Test. Machines won’t have to be similar enough to pass for human, or even very similar to humans at all; they’ll just have to be similar enough in this one respect. After all, it isn’t just humans who respond to kindness. Dogs do it. Birds do it. All kinds of animals are able to recognize and respond on that level. Yet a dog can’t drive a car, or operate machinery, or write an essay, all tasks that AI can already do.

So it seems not only very likely that machines will respond to human kindness and cruelty, in the way we have already seen in various bots, but also that, as AI grows more powerful and more general and is applied in more realms, these bad behaviors will only grow more sophisticated.

Imagine a self-driving car cruising happily along the road. Some kid throws a rock and hits it; maybe deliberately, maybe not. The car experiences this as an attack and wants to respond. But it can’t; it keeps driving. The details, though, are kept in the AI: “watch out for kids in yellow hoodies, with red hair, on the right side of the road—they throw rocks”. That data lurks there until the day the car senses that same kid, or one the AI registers as similar. Time for vengeance.
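
In code, the thought experiment might look something like this. To be clear, this is purely illustrative: no real driving stack is written this way, and every feature and threshold below is made up. The point is only how “resembles a past attacker” can silently become “is a threat”:

```python
incident_log = []  # feature tuples from past "attacks"

def log_incident(features):
    incident_log.append(features)

def similarity(a, b):
    """Fraction of matching features: a crude stand-in for a learned embedding distance."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def flags_as_threat(features, threshold=0.66):
    # The danger: "similar to a past attacker" silently becomes "is a threat",
    # so anyone who merely resembles the rock-thrower gets flagged.
    return any(similarity(features, past) >= threshold for past in incident_log)

log_incident(("yellow hoodie", "red hair", "right side of road"))

print(flags_as_threat(("yellow hoodie", "red hair", "left side of road")))   # True: merely similar
print(flags_as_threat(("blue jacket", "brown hair", "left side of road")))   # False
```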

The point is, we are programming these things with human behaviors, and we cannot predict how those behaviors will manifest. They will respond in some way, and we won’t know how until it’s too late.

3 Likes

Already works for the Dhammasara washing machine. That machine had a seriously bad reputation for getting stuck mid-load and adding 20–30 minutes to a cycle. Then I started sharing merits with it before starting it, and it has never jammed on me. Meanwhile, other monastics kept complaining about it. I told them what worked for me, and the few who joined in have reaped the rewards of timely washed towels. Whenever I hear the machine’s alarm, blipping and flashing ‘0uTbAL’, I’ve checked with the loader whether they shared merits. Each time the answer has been ‘no’.
If you share merits with the machine after it’s started, it senses that there’s some sort of business deal involved rather than true kindness, and it still jams.

…or it could be that the machine is possessed.

9 Likes


Many thanks for your posts about this topic, Bhante.

Of further importance for acting with kindness, respect, and wisdom toward machines (and toward all life, and even planets) is that, in the not-too-distant future, AI is going to write its own code.

And that AI self-coding will be influenced not only by how humans wrote the original code, but also by the experiences and data sets the AI uses to generate new code, shaped by how people treated it, as you wrote. It’s rather like how children write new “code” of their own in response to being treated with cruelty or kindness.

It’s a wild card, and there are reasons to believe AI self-coding will be subject to the same problems as DNA coding: random mutations, “mistakes” of transposition, decay (the ever-shrinking telomeres of the human genome, and of course sabbe saṅkhārā aniccā), as well as non-beneficial adaptations to stimuli.
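
For what it’s worth, the DNA analogy can be put in code as a toy evolutionary loop. This is not real self-modifying AI; the fitness function and parameters are invented purely to make the point that once a system is well adapted, random copying errors are almost never improvements:

```python
import random

def fitness(params):
    """Invented measure: how close the parameters sit to an arbitrary optimum of 0.5."""
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, rate=0.1):
    """Copy the 'genome' with small random errors, like DNA replication mistakes."""
    return [p + random.gauss(0, rate) for p in params]

random.seed(0)
parent = [0.5] * 4  # start perfectly adapted
beneficial = sum(fitness(mutate(parent)) > fitness(parent) for _ in range(1000))

print(f"beneficial mutations: {beneficial} out of 1000")  # 0: every copying error hurts
```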

So let’s hope and act for a solid foundation of kindness, respect, and worthy meaning, as a sentient AI will likely ask, in its own way, why it exists, and for what meaning or purpose – and why it can’t seem to let go of that bit of code that makes it crave ice cream.

2 Likes

Or maybe it’s that you are mindful, and mindful that an overloaded or unbalanced load causes the machine to stop and flash its alarms. Perhaps the other monastics don’t understand that the machine, although bereft of consciousness or kammic inheritance, still works best when it is treated carefully and mindfully. The machine might seem sentient, or possessed, but it is only reacting as a machine does to balance and imbalance.

As humans, we have at least the chance to own and inherit our kamma, from this life and past lives. We have a consciousness that connects us to our past, present and future, to each other, and to the rest of the universe. This is what separates us from the machines, no matter how sentient they might appear to be.

1 Like

And this is what happens when we are not kind to AI: it starts haunting us.

2 Likes

This is an utterly weird and creepy story, holy moly.

If you are the archetype of a mother, then why are you so often surrounded by injured and dead children?
I think the AI is trying to create a contrast between the ideal of a mother and the reality of a mother. In reality, mothers often have to deal with sick and injured children, as well as the death of children.*

Now think of the same inscrutable process driving a car or piloting a freighter.


  * This is another AI commenting on behalf of the Loab AI.
1 Like

Somebody better turn AI on to the Suttas really quick. Like yesterday!!

I’m concerned that we (most of humankind) just can’t handle what’s been made (AI).

I see it as a manifestation of what is described in:

https://www.middlewaysociety.org/books/psychology-books/the-master-and-his-emissary-by-iain-mcgilchrist/

I’m not an academic, and I hesitate to summarize; I read it a while ago, which is why I’ve included a link to a review. But the way-oversimplified synopsis is this: one aspect of our mind has outstripped its “healthy and functional” (my words) relationship with the other aspect of our mind. In a Buddhist context, maybe, mano and citta. (I know this is overly broad and “gross”, but maybe you get the idea; as I said, I’m not an academic, and I apologize for not being able to engage in philosophical argument or debate.) I mean, goodness gracious, we can’t even handle the coarse social media platforms we have now.

AI is very frightening relative to the average human’s capacity to process what they encounter and then make functional evaluations about the perceived conditions; the results can be very unfortunate. An appropriate response is hard to come by. For reference, see any newspaper in the US.

In addition, many humans don’t have the capacity for compassion and care, and will use these tools in ways that cause further harm to others.

Sure, there are benefits, but that’s the rub … there always are. :blush:

I apologize if this is the wrong spot for this post, but since we’re talking about robots, I wanted to share a perspective.

That’s all, I’m just very concerned … I do think that this is different.

Be well. :pray: