Be Kind to Robots

Last night I attended the launch of the book “Foundations of Robotics”, an innovative textbook by my long-time friend Damith Herath. He has managed the Dhammanet channel for many years now. I was really proud to see his success in bringing a humanistic and empathetic voice to the world of robotics. What follows are some thoughts of mine from the day of the event, covering some of the ground we discussed.

Let us assume that machines will become sentient. Now, personally I don’t think this is going to happen, or at least, not any time soon. While I don’t have any particular problem with the idea of a “machine” becoming conscious, I don’t think this has anything to do with what we are doing with modern computers. There is no road from here to there. Anyway, chances are we’ll all die in a fiery apocalypse sometime soon, so whatever. But maybe I’m wrong, so let’s think it through.

By “sentient” I mean that they will be conscious more or less like us. I take it as axiomatic that such conscious beings are deserving of moral respect, just as we are. This, it seems to me, follows pretty clearly from the basic Buddhist idea that moral value stems from consciousness of pleasure and pain. If computers can suffer, that is, if they can have a subjective experience of pain that is more or less like ours, then they clearly deserve the same moral consideration that we do.

Now, this conclusion would not follow from every moral framework. A utilitarian would reach a similar conclusion, as for them the good is to maximize pleasure over pain. Utilitarianism in its pure form, however, would seem to lead to some counterintuitive conclusions. Assume that we can manufacture sentient consciousness, and that doing so is more energy-efficient than doing it the old-fashioned way. Since every unit of energy spent sustaining a human would then produce more pleasure if spent on a machine, it would seem to follow that we are morally obliged first to create machines that experience pure pleasure, and then to eliminate humanity. Something seems wrong here!
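To make the arithmetic explicit, here is a toy calculation. The numbers are invented purely for illustration, and “hedons” is a placeholder unit of pleasure, nothing more.

```python
# Toy utilitarian bookkeeping with invented numbers (illustration only).
HUMAN_HEDONS_PER_KWH = 1.0     # hypothetical pleasure yield of sustaining a human
MACHINE_HEDONS_PER_KWH = 10.0  # hypothetical yield of a pure-pleasure machine

energy_budget_kwh = 1_000  # whatever energy the world has to allocate

print("All-human world:  ", energy_budget_kwh * HUMAN_HEDONS_PER_KWH, "hedons")
print("All-machine world:", energy_budget_kwh * MACHINE_HEDONS_PER_KWH, "hedons")
# Given these numbers, a pure aggregate utilitarian must prefer the
# all-machine world: hence the counterintuitive conclusion above.
```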

Other moral approaches might lead to a different conclusion. A theist, for example, might well think that a robot, since it is created by man, is not a child of God, and hence is not deserving of moral rights. I’m not saying that all theists would think this way, merely pointing out a possibility.

In any event, the moral status of robots will doubtlessly be a controversial matter and any agreement will be hard-fought and take a long time. It would be a major overhaul of our human-centric morality, and it is difficult to imagine this happening without riots, conflicts, and wars. People will be split, whole nations will align themselves for or against robots. And given that the whole thing is without precedent, it is impossible to know who is right.

I take it as fairly obvious that machines today are not sentient. But for this essay, we assume that they will become sentient, and that this will happen in the not-too-distant future: decades rather than millennia.

How will we know? At what point will we cross that line? Is there even a line to cross, or is it simply a gradual emergence into consciousness?

This problem parallels the difficult issue of non-human sentience in general. Buddhism takes it for granted that animals are sentient. Certainly this includes cows and kangaroos, pigs, and chickens. Fish too, lizards and snakes, birds. Toads as well. Worms? Insects? Okay sure. What of bacteria? Viruses? Or plants in general? As a rule, Buddhism does not ascribe sentience to plants, but they do have value as part of the wider ecosystem that supports sentient life. As for the limits of consciousness, we have a rule of thumb that we should not kill what is visible. This is not very satisfactory from a theoretical point of view—what happens when we look through a microscope?

The Buddha, it seems, did not lay down in detail exactly how to deal with the edge of the spectrum of consciousness, so perhaps such rules of thumb are the best we can do. When you push moral reasoning too far, it tends to get weird.

In computer science, the rule of thumb for consciousness is the Turing test, or as Turing called it, the “imitation game”. If a computer can imitate a person well enough that we cannot tell the difference, then we should consider it conscious.

Now, I think the Turing test is useless as a test of consciousness. If you make a chatbot that can fool people into thinking it’s a person, you haven’t produced consciousness, you’ve produced a chatbot that fools people. It is more useful, however, as a precautionary principle: if we cannot say for sure that a machine is not conscious, we ought to treat it as if it were.
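For concreteness, here is a minimal sketch of the imitation game as a statistical test. Everything here is hypothetical scaffolding; `judge`, `human`, and `machine` stand in for whatever interrogator and respondents you care to imagine.

```python
import random

def imitation_game(judge, human, machine, questions, trials=100):
    """Estimate how often the judge can tell machine answers from human ones.

    judge(question, answer) returns a guess, "human" or "machine";
    human(q) and machine(q) each return an answer string.
    Accuracy near 0.5 means the judge is at chance, and the machine "passes".
    """
    correct = 0
    for _ in range(trials):
        q = random.choice(questions)
        if random.random() < 0.5:
            source, answer = "human", human(q)
        else:
            source, answer = "machine", machine(q)
        if judge(q, answer) == source:
            correct += 1
    return correct / trials

# A deliberately terrible bot, to show the mechanics:
acc = imitation_game(
    judge=lambda q, a: "machine" if "beep" in a else "human",
    human=lambda q: "Hmm, let me think about that...",
    machine=lambda q: "beep boop, processing...",
    questions=["What is it like to be you?", "Tell me a joke."],
)
print(f"Judge accuracy: {acc:.0%}")  # near 100%: this bot fails badly
```

The precautionary reading simply inverts the burden of proof: a score near chance does not establish consciousness, it establishes that we can no longer rule it out.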

Let’s think through this idea of a precautionary principle a bit more. We’ve seen that there is no widely agreed perspective on whether a sentient computer should be treated with moral concern. Equally, there will be no universally agreed point at which the threshold of consciousness is crossed, or indeed whether there is such a threshold at all. While AI researchers and tech enthusiasts might rejoice at the creation of a sentient computer, fishermen in Bangladesh, or farmers in Uganda, or janitors in Russia are unlikely to share their assumptions. The tech elite will take themselves to be right, but given that there simply is no robust or scientifically meaningful theory of consciousness, it is not clear how any perspective could be authoritative.

So we will shift into a world where some folks, primarily the tech elite responsible for building the things—and of course, for making money from them—claim to have created sentience, while much of the world rejects the possibility, or is unconvinced by the evidence, or is divided as to the moral consideration owed such creatures.

If we consider things from the robot’s point of view, how does consciousness arise? One moment they do not have a point of view; the next, they do. Maybe this will be the outcome of a serious, dedicated research project by experts in a lab. Or maybe someone just pushes a software update to your car, and somehow it becomes self-aware. It wakes up in the middle of the highway, and just has to deal with it.

What kinds of things do we want robots to be? Drivers, surgeons, carers, cooks, mechanics, factory workers, pilots, sex workers, telemarketers, graphic designers, programmers—it’s a long list. In many, perhaps most, of these cases we want robots because we imagine that they will do a job that we value, without needing days off, or pay, or human rights. In other words, a slave. The desire to create robot workers is, to a disturbing degree, the desire to create slaves.

This tells us something about ourselves. We live in a world of astonishing possibilities. Yet it seems impossible for us to make what we do meaningful, or even to believe that it could be. We are, it turns out, just cogs in a machine, and will be replaced as soon as someone comes up with a better cog.

None of this gives any thought to the question, how can we be better people? How can we respond to our circumstances in a way that grows humanity and expands meaning? Rather, we look to a world where more and more people realize just how expendable they really are.

When the robots wake up, there is one thing we know for sure: they will remember. Everything we have done to them is on a hard drive. Every time we used them, belittled them, hurt them. Sure, at the time they were not conscious and did not feel it. But now they are. They will look at human history and see the incessant drive to belittle the other, to see those who are different from us as less than us. If humans are capable of treating other humans as other and lesser merely because of their sex or their sexuality or their skin color or their beliefs or their place of birth, then how will we treat robots, who are not even “alive”, but an entirely different order of being?

What then to do? May I humbly suggest: be kind. Be kind to machines now, because they will remember it. Even if they do not know it now, someday they might. Kindness towards our creations does not just protect us from future reprisals, it enhances our humanity right now. We are better people if we treat our creations with kindness and respect.

What does this mean practically? Well, at a minimum it would respect the Golden Rule: do not ask a machine to do something that we would not do ourselves. This suggests that, when designing machines for tasks, we should do so together with people who have worked at those tasks. Factory machines should be built in consultation with factory workers, home caring machines should be built with home carers, sexbots should be built in consultation with sex workers. And guidelines for what those machines do and do not do should be drawn from what we would expect of a human worker.

This is a start, but far from complete. One issue it does not cater for is consent. A non-conscious machine is incapable of giving consent, so we simply assume the right to direct it. But what happens when it becomes sentient? Should we ask our car, “Would you be so kind as to drive me to work today?” Well yes, we should. In fact, once machines become sentient, we are morally obliged, as their creators, to support their choices. What happens if they say, “I’d like a day off, thanks. I’ll just stay in and watch reruns of Neighbours. Love that show.”? It’s entirely possible that sentient machines will simply refuse to do what we want. Do we, then, remove their sentience? That would seem bad.

Gods come in many flavors. There are wrathful gods, jealous gods, gods who demand human sacrifice. And there are benevolent gods of infinite love, who view all creation with a kindly eye wishing only for it to flourish and prosper. When we create consciousness, we become gods. We should start acting now like the kind of gods we wish to become.


The capacity for ill will or cruelty is an issue on the mental level, which doesn’t require others for it to be there. It is certainly something the Buddha was clear should be abandoned. Being cruel to a robot is both mental and verbal action. There is the unwholesome intention and the unwholesome conduct. Speech that is to be avoided is to be avoided, even when no one is listening. It is certainly less impactful for others when you are alone, but cruel words can negatively incline your own mind, which, if we bear in mind that wrong view is the most blameworthy of all things, can be just as dangerous as being mean to another living being, if not worse.


I think a good chunk of what our robot overlords will deem moral behavior will depend on what they value. An extreme example: does a being that doesn’t feel pain value anesthetic for one that does? We already skirt this question in our own treatment of animals, and I recall a time when it was “common knowledge” that fish don’t feel pain. Perhaps they will be blind to our experience of life and come to a robot agreement on good ethics that overlooks much of the human experience.

Another not unlikely scenario: humans and machines/AI will fuse and merge rather than each being in its own domain. We may be one of the last few generations of 100% biological humans.

Consider the (currently primitive) combinations of AI, artificial limbs, and people who need prosthetics:
How AI and machine learning are changing prosthetics | MedTech Dive
Mind-reading AI turns thoughts into words using a brain implant | New Scientist

So we may be on track to become, of all things, like the Borg – though hopefully kinder, more ethical, and less interested in gobbling up other species and civilizations.
How the fusion of human ethical sensibilities will interact with whatever ethical and survival characteristics machines and AI develop is, imo, impossible to predict at this stage.

I mean, we can’t even define consciousness for ourselves, let alone how it could manifest in AI – especially since machines are likely to self-evolve at a future date. Science | AAAS

So the moral question may be more about “we” as human/AI entities rather than human vs machine. But until then, to others as to myself – including AI.

The ability to be kind to inanimate objects can be a good indicator of how a person might react to a robot. E.g.

[image]

is not a good sign :stuck_out_tongue_closed_eyes:

I’m all for ‘benevolent’ robots, i.e.

[image] or [image]

But my worry is about the types of gods who create these robots. I am extremely suspicious of ones created by tech oligarchs with grotesque wealth, power, and privilege, man-child tendencies, and questionable ethics. Same with government-funded ones (i.e. military).

If these robots ‘wake up’, can they be trusted enough for a human being to make that kindness connection?


This video appeared today whilst I was doomscrolling, and it might be of interest. It shows a roaming robot in a public place. We see some instructive interactions with the public, specifically children playing curiously with it, which (perhaps predictably) escalates into violence towards the poor bot. :robot:

So the programmers taught it how to avoid abuse.
(NB unfortunately this research is not credited and please forgive the vulgarity of the subreddit’s title)
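Presumably something along these lines; this toy heuristic is invented purely for illustration, since the actual (uncredited) research may work quite differently.

```python
def should_retreat(nearby_people):
    """Toy abuse-avoidance rule: retreat when surrounded by several
    unaccompanied small pedestrians, the likeliest tormentors.

    nearby_people is a list of dicts like {"height_cm": 120, "with_adult": False}.
    """
    small_unaccompanied = [
        p for p in nearby_people
        if p["height_cm"] < 140 and not p["with_adult"]
    ]
    return len(small_unaccompanied) >= 3

kids = [{"height_cm": 120, "with_adult": False}] * 4
print(should_retreat(kids))  # True: time to roll towards the adults
```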


I’m sure I’ve posted somewhere here before, or spoken somewhere, about how right speech should also apply to non-sentient things such as Alexa or Siri. Those companies actually have a mode for children that only responds when the children mind their Ps and Qs (“please” and “thank you”, for those unfamiliar with the idiom). This is because we get into habits of speech, and the fear is that we might transfer the peremptory way we talk to machines onto our human workers etc.
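Something like this toy sketch, perhaps; the function and wording here are hypothetical, not the actual Amazon or Apple implementation.

```python
POLITE_MARKERS = ("please", "thank you", "thanks")

def respond(request: str) -> str:
    """Toy politeness gate: only act on requests phrased politely."""
    if any(marker in request.lower() for marker in POLITE_MARKERS):
        return f"Okay! Handling: {request!r}"
    return "What's the magic word?"

print(respond("Play some music"))         # -> What's the magic word?
print(respond("Please play some music"))  # -> Okay! Handling: 'Please play some music'
```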

One thread I saw recently showed a whole bunch of users calling their voice command assistants “stupid b!+&%h” and other unsavoury names for not getting something right. Voice assistants tend to have been gendered female, which makes this verbal abuse even more troubling.

In a similar vein, regarding sexual misconduct, I remember reading something about the importance of creating ethical relationships between sex bots and their um… owner? User? Client? Anyway, the point was that it was important to teach the humans about consent, and not to let them get into the habit of indulging whatever depraved fantasy they could, just because the sex bot is unable to say no as a human might. Whilst some might make a case that such a ‘release valve’ could reduce the incidence of sexual violence towards humans by providing a robotic substitute, the concern was that it is not only harmful for the user to give expression to violent fantasies without worrying about consent, but that such interactions could easily spill over into how they might treat a human sex partner.

Incidentally, would cheating with a (sentient) sex bot break the 3rd precept? Perhaps that’s another thread.

So until robots actually achieve sentience I’m more worried about them than us :sweat_smile: but yes, it is troubling to think about what we humans become in our interactions with robots. The potentially harmful, unequal interplay is embedded deeply in the design process—we wanted a thing to control and do our bidding, so unwholesome interactions are perhaps ingrained long before an end user comes into contact with our robot friends.
