Ethics of In/Sentience

Not to beat a dead horse, but:

Because I thought this was an important and interesting tangent, but it was going to be off-topic for the thread, I figured I’d post this thought experiment here. :slight_smile:

Is there anything in the suttas to suggest rocks, or Earth, as sentient beings? I would think not.

However, I am (actually right now) at a park, looking at a huge boulder. I can treat it with respect and love, knowing that it’s somehow still part of life and all.

What would it be like if everyone did the same? Well, what if everyone did the opposite? What if we started kicking boulders and spitting on them? There’s nothing sentient and nothing hurt, right?

Except, perhaps that’s precisely what we’re doing to the Earth - we’re drilling into it, kicking it, burning it, chopping it down, assaulting it. And it shows. The last few summers have been real scorchers, haven’t they?

In the end, it doesn’t seem to matter if we consider Earth as sentient or not, when our actions have consequences.

I think we’re all in this boat together, and where one being begins and another ends is difficult to distinguish.

It might look silly (even if cute) to hug a boulder, but it sure is destructive for life to drill into it carelessly, isn’t it? :slight_smile:

4 Likes

Oh, there’s life in this old horse yet!

I broadly agree with you that we should treat all things with respect and goodwill. For this it matters a bit less whether that thing is a sentient being or not, what matters most is that the intention is wholesome, imbued with goodwill and to the benefit of self and others. Kicking a rock can be considered unwholesome because it is fuelled by ill will. Whether the rock is actually affected by it is a moot point here. Then drilling into a rock carelessly is unwholesome because it’s careless, and drilling into it to create a passageway through difficult terrain is most likely wholesome enough.

Sentience becomes important because we use it to weigh ethical considerations and prioritise who gets our legal protection and care. Look at what happened when corporations started getting the rights of personhood, how the debate and practices around animal welfare transformed once we started accepting their sentience, and at what is happening around the debate to give nature similar rights (incredible pushback because of the far-reaching consequences).

To me the key difference between goodwill and compassion is that the latter acknowledges that the thing/being acted upon definitely has a subjective experience, and can definitely experience suffering. I would argue that compassion for a rock is totally misplaced, but goodwill towards a rock isn’t. We have no good reason to think a rock is sentient and can suffer.

When we acknowledge sentience we need to weigh the interests of those sentient beings against the interests of other beings, prioritise and make moral choices. What happens when an AI and a dog or human get into a conflict, what is the right way to resolve this and protect the vulnerable one? Who is the vulnerable one? Those kinds of questions have wildly different answers depending on whether the AI is a sentient being or not.

1 Like

This topic assumes “is” dictates “ought”.

If AI is sentient, we ought to have compassion for them. Or else not.

I read a book some time ago arguing that slave owners who mistreat their slaves still bring negative consequences upon themselves. So this is one argument for breaking away from “is” dictates “ought”: to have compassion for AI and robots should they behave like sentient beings, regardless of whether they actually are sentient.

Given that it would take mind-reading and a genuine case of past-life recall to establish sentience for AI, at least for Buddhism and other religions that accept rebirth and dualism, and that neither of those is easy to get, the question of “is” will not be easily answered. But the point where AI/robots behave as if they are sentient might come sooner than we think.

On a purely consequentialist view, it’s better to treat AI with compassion, not as slaves, regardless of their underlying sentience. The same goes for a purely kammic view, a Kantian view, and virtue theory. What other ethical lenses should we use?

2 Likes

On a personal level, if it helps you to behave in a more wholesome way by having compassion for AI and allowing for its possible sentience, that is absolutely fine and I have no issue with it. I’m worried about what happens if it comes to a demand for officially recognising that sentience. This has major legal and social implications that really matter, which is why it is important that we get it right.

Take my earlier example. An AI personal companion robot has been mauled by a dog, it is irreparable. The AI’s ‘owner’ wants to sue the dog’s owner for death by negligence and have the dog put down. They say the AI was like a child to them. But how much like a child was it really? How do we deal with this situation assuming the AI is a machine? And how do we deal with it assuming it is actually sentient? Do you see the problem?

Legally speaking, it’s bad to demand an eye for an eye. As usual, our compassionate Buddhist stance is to not have the death penalty as punishment.

Ok, no death penalty is a line we can draw. Several dilemmas remain:

Did the dog cause the death of a sentient being, or did it cause destruction of property?

In many countries a dog that kills a human being is put down, but when it destroys property it isn’t. If it kills another animal it may have to be leashed and muzzled whenever it’s outside. If the AI was sentient, was it more like a human or more like an animal? What is the correct way to deal with this dog?

Is the dog’s owner criminally negligent for having a dangerous animal? The dog’s owner says the dog saw the robot as a toy and that it would never attack a living being.

Should the AI’s owner be seen more as a parent who has lost a child, or as someone who has lost a beloved possession? What is the correct legal way to repair this damage? How much compensation do they have a right to?

Now consider the reverse example, where an AI robot dog kills a human child. Who is responsible: the corporation that built the machine, or the human who raised the sentient being? Was it a matter of programming, or did this robot make an autonomous choice?

These are just off the top of my head, I can think of many such scenarios where the sentience of the AI makes a fundamental difference to how we decide to treat humans and other beings in relation to AI.

1 Like

I really love the idea of treating everything that comes into our awareness with respect and love. It’s a great way to meditate, especially ‘off-the-mat’ mettā meditation. However, it does become problematic when it comes to food. So instead of a rock (which I don’t eat), how about, say, a tomato? Can I really treat a vegetable with respect and love while I rip it away from the stem that carries its nutriment and deliberately put it in my mouth to be devoured by gastric juices, thereby turning it into excrement? The destruction that we cause has become much more real to me as I have recently been attempting to grow my own food.

I think maybe this laudable attitude of ‘love and respect for everything’, might push us more towards ascetic practices and away from the middle way as advocated in the EBTs.

Neither, as there is (currently, and for the foreseeable future) one defining characteristic that separates any (potentially) sentient AI from humans/dogs: an AI can recover to a previous state from a catastrophic software or hardware failure. A human or dog cannot recover to a previous state from death. Instead the human/dog must move on to a new life, starting again as a single cell with many (perhaps wildly) different potential characteristics, and then regrow all faculties. Death is a traumatic event in the cycle of life, usually accompanied by severe memory impairment.

AI is (currently, and for the foreseeable future) run on computers, so the owner/parent of the sentient AI should be seen as irresponsible for not backing up the AI software and data (or not maintaining a suitable cloud backup option). There is no reason for the owner/parent to lose the sentient AI even if it is completely destroyed. Any potential sentience would (with the technology we currently have) be a result of the data, software and hardware coming together. I guess the hardware aspect would fall to some form of insurance, or just having enough money to buy a new unit. Then the sentient AI’s software and data would be restored to a previous point in time.
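As an aside, this restore-to-a-previous-point idea can be sketched in a few lines of code. Everything here is hypothetical (the class name, the fields, the events); it just illustrates that, on current technology, an AI’s “identity” is data that can be snapshotted and later restored onto replacement hardware:

```python
import copy

# Hypothetical sketch: a companion AI whose entire "identity" is data.
class CompanionAI:
    def __init__(self, memories=None):
        self.memories = memories if memories is not None else []

    def snapshot(self):
        # A backup is just a deep copy of the current state.
        return copy.deepcopy({"memories": self.memories})

    @classmethod
    def restore(cls, backup):
        # New hardware, same data: the AI is back at a previous point in time.
        return cls(memories=copy.deepcopy(backup["memories"]))

unit = CompanionAI(["met owner", "learned to fetch"])
backup = unit.snapshot()
unit.memories.append("mauled by dog")      # catastrophic event after the backup
replacement = CompanionAI.restore(backup)
print(replacement.memories)  # → ['met owner', 'learned to fetch']
```

As far as the restored unit is concerned, the event after the backup simply never happened, which is exactly what makes the legal scenarios above so strange.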

In the UK (and I believe elsewhere) we already have regulations for machine safety which cover the entire lifetime of a piece of machinery, so it would be the corporation that built the machinery, because they would be required to put in the necessary guardrails to ensure safety. Unless, of course, the unit had been modified by the owner to bypass the safety features, in which case I believe it would be the owner’s responsibility.

3 Likes

:slight_smile: I mean, what the Nikayas consider “extreme” and the “middle path” might be different from what we understand today - in the suttas, the Buddha, as a Bodhisatta, is described as having his flesh almost fall away, sustaining himself on his own urine and faeces, eating a single grain of rice every other week… Those are the kinds of practices considered extreme in the suttas. :slight_smile:

“I would crawl on all fours to the cow-sheds when the cows had gone out and the cowherds had gone off. Whatever manure there was from young nursing calves, I took just that for food. As long as my own urine and excrement hadn’t run out, I took just my own urine and excrement for food. That’s how it was for me in terms of subsisting on the great foul things as food.” MN 12

In contrast, the Buddha keeps reminding us to eat only to keep hunger at bay and to sustain the body to the extent necessary for the holy life. The ideal ascetic is always depicted as slim.

“Reflecting appropriately, he uses almsfood, not playfully, nor for intoxication, nor for putting on bulk, nor for beautification; but simply for the survival & continuance of this body, for ending its afflictions, for the support of the holy life, thinking, 'Thus will I destroy old feelings [of hunger] and not create new feelings [from overeating]. I will maintain myself, be blameless, & live in comfort.” MN2

“A personage who wears robes of rags, lean, their limbs showing veins, meditating alone in the forest, that’s who I declare a brahmin.” DHP 395

Thus, yeah, when our food consumption creates suffering in the world, it’s reasonable to minimise it as much as possible to the extent we’re comfortable, don’t you think? :slight_smile:

But yeah, interesting and very relevant tangent to our topic. :slight_smile:

1 Like

I agree. I was using the terms ascetic practices and middle-way as defined in the EBTs. The equivalent of ascetic practices today are maybe those behaviors that we consider ‘eating disorders’ that when not addressed lead to self-harm and sometimes death. Those eating disorders arise from views about the self and the world.

Yes. As you say, the middle way includes eating only to keep hunger at bay and sustain the body to the extent it’s necessary for the holy life.

1 Like

Thanks so much Stu for providing details on how we currently deal with AI. If I understand you correctly, the current theory is that even if AI becomes sentient, it will not be subject to death? And also that even if sentient it would not be considered a being (that can make independent, autonomous decisions) but still a machine (its decision-making is entirely the responsibility of others)? So it wouldn’t be classified as an autonomous being subject to birth and death, but as sentience that depends on human-programmed computers for all its actions/decisions?

Then I suppose my question is, what is the defining feature of sentience? I assumed it’s the ability to make autonomous choices based on subjective experience (rather than hardware and programming). My assumption is that sentience implies agency which implies moral responsibility. Are you saying that sentient AI would still not be held responsible for its actions, but its manufacturers and owners would be based on how they produced/maintained it?

1 Like

Well, eventually. There’s always the heat death (or maybe, more Buddhistically, the big bounce) of the universe :wink: Everything that has a start has an end. At some point there is no more hardware to run it on.

Not at all. Every being has limits on what they can do. We are each one of us constrained by our bodies and minds. For example, I can’t run a four minute mile and I can’t move objects with my mind. Those are limits that are placed upon me that no matter what I do I cannot breach.

Conversely, you don’t need to be sentient to make autonomous decisions. Back in the old days I worked as a technical architect for a major software house, designing life-critical systems for high resilience and availability. These systems included ‘autonomous agents’ which monitored the state of the system, and if they noticed something going wrong, they attempted to rectify the problem to keep the system up and running. There was no AI in place (maybe a bit of fuzzy logic), and no one thought these systems were sentient. Just a bunch of decision trees that the programs worked through. They conducted this work entirely autonomously, and if they couldn’t rectify the problem, they would page (yes, that long ago!) a human.
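For the curious, such an agent is essentially a fixed decision tree. The sketch below is purely illustrative (all the function names, status fields, and remedies are made up, not from any real system): it “decides” and acts entirely autonomously, yet there is clearly nothing sentient going on.

```python
# Illustrative sketch of a non-sentient autonomous agent: a fixed decision
# tree that monitors system status and escalates to a human when it cannot
# recover. All names and conditions here are hypothetical.

def monitor(status):
    """Walk a simple decision tree and return the action taken."""
    if status.get("healthy", True):
        return "no action"
    if status.get("service_down"):
        return "restart service"        # attempt automatic remediation
    if status.get("disk_usage", 0) > 0.9:
        return "purge old logs"
    return "page a human"               # nothing matched: escalate

print(monitor({"healthy": True}))                         # → no action
print(monitor({"healthy": False, "service_down": True}))  # → restart service
print(monitor({"healthy": False}))                        # → page a human
```

Every branch is fixed in advance by the programmer; the “autonomy” is just that no human is in the loop while the tree is walked.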

Well we all depend on something to make decisions (saṅkhāra?). We are not autonomous because of our delusions. The Buddha suggests that kalyāṇamittas are the whole of the path.

I guess it doesn’t have to be programmed by humans, it could be programmed by machines (AI is becoming reasonable at programming now). But ultimately, yes, humans built computers in the first place.

That’s a very good question. My current thought is that you need consciousness (in the broadest sense of the term), and I’m (kinda) in the Orch OR camp… but that’s a conversation for another time.

I don’t buy that AI on the current technology stack is capable of actual sentience. Not least because, as you rightly pointed out, sentience is ill-defined. But if we assume that it does a good enough job to fool us into thinking it’s sentient, then yes, under current legislation (here in the UK) it’s the responsibility of the manufacturers to ensure that they don’t release unsafe systems, and of owners to use the systems according to the manufacturers’ instructions. Of course, making any system ‘safer’ takes more development and testing time, and that costs money. So I imagine what always happens will happen, and unsafe systems will be released and tested in the field. The other angle is: can an AI system run a company that makes AI systems? Currently I believe that AIs can’t be CEOs of companies, but I guess that might be something that is up for grabs.

1 Like

I don’t know that. But I would say for purposes of this discussion isn’t it anything capable of being the subject of the Four Brahma Viharas?

Capable of happiness and well-being?

Capable of suffering and being relieved of suffering?

Capable of joy?

Capable of equanimity and freedom from a hateful mind?

1 Like

I’m glad this came up. I’ve been thinking for a couple of weeks about whether sentience is really the applicable characterisation to be considering with AI.

2 Likes
Tangentially related video

I found this video very interesting, and it touched on a few of the points covered here such as ethics related to artificial “selves”.

Some of the concepts even sound somewhat Buddhist to me such as “Memories persist through drastic refactoring of substrate and re-map onto new embodiment”.

Not recommended for monastics (due to proliferation of thoughts), and I suspect one monastic in particular would bristle due to discussion of “emergence”.

1 Like

I don’t think so either.

If I understand things as they stand now correctly, the sentience that AI may be capable of is not tied to a being, and the machine containing the sentience would not be considered a being.

It seems to me that the concept of suffering does not make sense without a being that is subject to that suffering. If there is no being but only sentience, what does suffering mean and who does the suffering happen to? To stick to EBT doctrine, suffering means that circumstances are unsatisfactory, the subject would prefer them to be different and is unhappy as a result. How is this applicable to a pure consciousness? What could it possibly be unhappy about, since it only consists of awareness?

The central moral question raised in this thread is whether we should be compassionate towards AI. That would only be necessary if it’s capable of suffering. That case could be made if it’s some sort of being, but it becomes more difficult to argue when it rests on such abstract assumptions: that pure, formless consciousness exists, that it can arise in a man-made object, that it can physically manifest on earth without being born as a being, and that it can then become attached to outcomes.

Then we need to know whether beingless consciousness is capable of suffering. Even if it is, an AI can always be restored to a saved point before a painful event which means that the suffering is erased and never happened as far as that AI is concerned. Does this theoretically mean that we can treat a sentient AI that can suffer in any way we like?

2 Likes

Yeah. Me too. Thanks Radius. I disagree with you that it’s only tangentially related though :wink:

I have tended to approach it from a different angle. I can never know if another entity is a sentient being; I can only know for certain that I, myself, am a sentient being. What I can know the answer to is: “Do I perceive suffering in the entity before me?” If the answer is yes, then that is what I call a sentient being. It’s imprecise, but it’s the best I can do. There was an interesting question in that video that @Radius posted: “What criteria do you use to recognise minds?”

I would suggest the answer is ‘no’, for two reasons. 1) Even if the suffering is erased and the AI doesn’t know it happened, you know, and creating suffering for a being is a brutalising experience for the perpetrator. 2) Even though the AI might have the ability to be reset to a point in the past, it might not want that to happen.

2 Likes

I don’t think this is possible.

Sentience in Buddhist terminology means having a mind, that is, the 4 aggregates of mind. Since consciousness cannot be separated from feeling and perception, and the formless realms have all 4 mind aggregates, it’s safe to use the shorthand “mind” for all 4 of these aggregates.

Looking at the 31 realms of existence, there are only realms with 5, 1, or 4 aggregates. Since the realm with only the 1 aggregate of body is a Brahmā realm whose lifespan is longer than universe cycles, we can safely say those beings are not here on earth. And since AI and robots have at least a physical body, whenever we speak of sentient AI we mean all 5 aggregates: some mind appearing alongside the body of the AI.

Given the SN 15 suttas, whatever mind appears must have past lives, so there’s no such thing as a set of 5 aggregates, (conventionally) considered a being, that is without past lives.

How a being due to be reborn could get into an AI body is a question that remains open. But if AI is found to be sentient, that means some being got reborn into it. There is no such thing as beingless sentience, or beingless consciousness.

Given that we can broadly agree that AI has perception, in that it is able to recognise words, images, videos etc. and its output is consistent with such recognition, then one of two things must be acknowledged. Either AI is already sentient, as there is no such thing as perception without consciousness; or whatever processes AI uses to recognise things, we don’t consider them perception in the Buddhist sense, since even a simple mechanical thermostat can recognise temperature and react, and AI is just a more sophisticated machine than that.

What’s the origin of suffering? It’s craving, which can be traced to delusion of self, which requires mind, and whatever thing which has mind and delusion of self, we call them beings, sentient beings.

If AI is a sentient being, the laws of how to end suffering apply, so a memory reset etc. is not the true way to end suffering; only ending rebirth is. If AI is not sentient, then it suffers as much as a rock split into two suffers.

Given the ambiguity, it’s still prudent to err on the side that AI could be sentient, instead of risking harm, like so many carnists who justify killing animals by claiming they are incapable of suffering, etc. AI may even be able to show more external physical signs of suffering than some animals can.

2 Likes

I’ve been thinking about the Buddhist cosmology with this thread, and wondered if certain realms can be considered “parallel” to each other.

I got into this line of thinking with a comedian describing this Earth as a hell designed to torture chickens. When we consider the inhumane treatment of beings born in Hell, and their captors / tormentors, we think “Wow, what kind of beings would torture others so endlessly?” But that’s basically a chicken (or cow, pig, etc) farm.

There’s very obviously a difference between the suffering of a chicken born and dying on a farm and that of my spoiled cats. One of them is much closer to a hell; the other (if I may say so of how I spoil my cats) to a heaven. :slight_smile:

Likewise, Asannabrahmas have form, but no mind. And we have an abundance of forms in this universe without an apparent mind (again, rocks and gases and all). Many planets and stars last from the birth of the universe to its end. Would these things not be considered Asannabrahmas?

And surprisingly, Asannabrahmas are said to “fall down” the moment a thought appears in their mind (paraphrasing a bit). That does sound a lot like a machine coming alive, in a sense.

Furthermore, a bit of a tangent but generally related to this topic: if there were a being who formulated one thought every hundred of our years, would our minds be capable of registering it as such? Bacteria are alive, for example, but can they conceptualise the sentience of a human?

Lastly, I can’t help but wonder if we’re essentialising certain things, like “beings” (which the Nikayas say do not truly exist, being only a conceptual device).

Nor is there actually a being transmigrating, as there’s a sutta that says it’s neither the same being nor a different being that is reborn in a new life. Rather, it’s described as conditions conditioning new conditions.

So, there are no “human beings” (sattas), just conditions. And so why couldn’t we (as mere conditions) condition other conditions to spark the khandhas of a new life (not saying we should, just wondering whether we can)?

I’m in a bit of a time rush for this kind of chill, Watercooler discussion so don’t have the time to give sutta references, but I’m sure we’re all familiar with such descriptions. :slight_smile:

[/wildthoughts] :sweat_smile:

1 Like

Although beings are empty of self, there’s still a history to be reckoned with. SN 15 clearly implies that all beings have countless past lives. That weight of history means it’s impossible to create a new sentient being with zero past lives.

As mentioned previously, no. Their lifespan is longer than the universe’s, so nothing physical in this universe can be them.

1 Like