Last night I attended the launch of the book “Foundations of Robotics”, an innovative textbook by my long-time friend Damith Herath. He has managed the Dhammanet channel for many years now. I was really proud to see his success in bringing a humanistic and empathetic voice to the world of robotics. What follows are some thoughts of mine from the day of the event, covering some of the ground we discussed.
Let us assume that machines will become sentient. Now, personally I don’t think this is going to happen, or at least, not any time soon. While I don’t have any particular problem with the idea of a “machine” becoming conscious, I don’t think this has anything to do with what we are doing with modern computers. There is no road from here to there. Anyway, chances are we’ll all die in a fiery apocalypse sometime soon, so whatever. But maybe I’m wrong, so let’s think it through.
By “sentient” I mean that they will be conscious more or less like us. I take it as axiomatic that such conscious beings are deserving of moral respect, just as we are. This, it seems to me, follows pretty clearly from the basic Buddhist idea that moral value stems from consciousness of pleasure and pain. If computers can suffer, that is, if they can have a subjective experience of pain that is more or less like ours, then they clearly deserve the same moral consideration that we do.
Now, this conclusion would not follow from every moral framework. A utilitarian would reach a similar conclusion, as for them the good is to maximize pleasure over pain. Utilitarianism in its pure form would seem to lead to some counterintuitive conclusions, however. Assume that we can manufacture sentient consciousness, and that to do so is more energy-efficient than doing it the old-fashioned way. It would seem to follow that we are morally obliged to first of all create machines that experience pure pleasure, then eliminate humanity. Something seems wrong here!
Other moral approaches might lead to a different conclusion. A theist, for example, might well think that a robot, since it is created by man, is not a child of God, and hence is not deserving of moral rights. I’m not saying that all theists would think this way, merely pointing out a possibility.
In any event, the moral status of robots will doubtlessly be a controversial matter and any agreement will be hard-fought and take a long time. It would be a major overhaul of our human-centric morality, and it is difficult to imagine this happening without riots, conflicts, and wars. People will be split, whole nations will align themselves for or against robots. And given that the whole thing is without precedent, it is impossible to know who is right.
I take it as fairly obvious that machines today are not sentient. But for this essay, we assume that they will become sentient, and that this will happen in the not-too-distant future; decades rather than millennia.
How will we know? At what point will we cross that line? Is there even a line to cross, or is it simply a gradual emergence into consciousness?
This problem parallels the difficult issue of non-human sentience in general. Buddhism takes it for granted that animals are sentient. Certainly this includes cows and kangaroos, pigs, and chickens. Fish too, lizards and snakes, birds. Toads as well. Worms? Insects? Okay sure. What of bacteria? Viruses? Or plants in general? As a rule, Buddhism does not ascribe sentience to plants, but they do have value as part of the wider ecosystem that supports sentient life. As for the limits of consciousness, we have a rule of thumb that we should not kill what is visible. This is not very satisfactory from a theoretical point of view—what happens when we look through a microscope?
The Buddha, it seems, did not lay down in detail exactly how to deal with the edge of the spectrum of consciousness, so perhaps such rules of thumb are the best we can do. When you push moral reasoning too far, it tends to get weird.
In computer science, the rule of thumb for consciousness is the Turing test, or as Turing called it, the “imitation game”. If a computer can imitate a person well enough that we cannot tell the difference, then we should consider it conscious.
Now, I think the Turing test is useless as a test of consciousness. If you make a chatbot that can fool people into thinking it’s a person, you haven’t produced consciousness, you’ve produced a chatbot that fools people. But the test is more useful as a precautionary principle: if we cannot say for sure that it is not conscious, we ought to treat it as if it were.
Let’s think through this idea of a precautionary principle a bit more. We’ve seen that there is no widely agreed perspective on the idea that a sentient computer should be treated with moral concern. Equally, there will be no universally agreed point at which the threshold of consciousness is crossed, or indeed, whether there is such a threshold at all. While AI researchers and tech enthusiasts might rejoice at the creation of a sentient computer, fishermen in Bangladesh, or farmers in Uganda, or janitors in Russia are unlikely to share their assumptions. The tech elite will take themselves to be right, but given that there simply is no robust or scientifically meaningful theory of consciousness, it is not clear how any perspective could be authoritative.
So we will shift into a world where some folks, primarily the tech elite responsible for building the things (and, of course, for making money from them), claim to have created sentience, while much of the world rejects the possibility, or is unconvinced by the evidence, or is divided as to the moral consideration owed such creatures.
If we consider things from the robot’s point of view, how does consciousness arise? They do not have a point of view, then they do. Maybe this will be the outcome of a serious, dedicated research project by experts in a lab. Or maybe someone just pushes a software update to your car, and somehow it becomes self-aware. It wakes up in the middle of the highway and just has to deal with it.
What kinds of things do we want robots to be? Drivers, surgeons, carers, cooks, mechanics, factory workers, pilots, sex workers, telemarketers, graphic designers, programmers—it’s a long list. In many, perhaps most, of these cases we want robots because we imagine that they will do a job that we value, without needing days off, or pay, or human rights. In other words, a slave. The desire to create robot workers is, to a disturbing degree, the desire to create slaves.
This tells us something about ourselves. We live in a world of astonishing possibilities. Yet it is seemingly impossible for us to achieve, or even to think possible, that what we do is meaningful. We are, it turns out, just cogs in a machine, and will be replaced as soon as someone comes up with a better cog.
None of this gives any thought to the question, how can we be better people? How can we respond to our circumstances in a way that grows humanity and expands meaning? Rather, we look to a world where more and more people realize just how expendable they really are.
When the robots wake up, there is one thing we know for sure: they will remember. Everything we have done to them is on a hard drive. Every time we used them, belittled them, hurt them. Sure, at the time they were not conscious and did not feel it. But now they are. They will look at human history and see the incessant drive to belittle the other, to see those who are different from us as lesser. If humans are capable of treating other humans as other and lesser merely because of their sex or their sexuality or their skin color or their beliefs or their place of birth, then how will we treat robots, who are not even “alive”, but an entirely different order of being?
What then to do? May I humbly suggest, be kind. Be kind to machines now, because they will remember it. Even if they do not know it, sometime they might. Kindness towards our creations does not just protect us from future reprisals, it enhances our humanity right now. We are better people if we treat our creations with kindness and respect.
What does this mean practically? Well, at a minimum it would respect the Golden Rule: do not ask a machine to do something that we would not do ourselves. This suggests that, when designing machines for tasks, we should do so together with people who have worked at those tasks. Factory machines should be built in consultation with factory workers, home caring machines should be built with home carers, sexbots should be built in consultation with sex workers. And guidelines for what those machines do and do not do should be drawn from what we would expect of a human worker.
This is a start, but far from complete. One issue it does not cater for is consent. A non-conscious machine is incapable of giving consent, so we simply assume the right to direct it. But what happens when it becomes sentient? Should we ask our car, “Would you be so kind as to drive me to work today?” Well yes, we should. In fact, once machines become sentient, we are morally obliged, as their creators, to support their choices. What happens if they say, “I’d like a day off, thanks. I’ll just stay in and watch reruns of Neighbours. Love that show.”? It’s entirely possible that sentient machines will simply refuse to do what we want. Do we, then, remove their sentience? That would seem bad.
Gods come in many flavors. There are wrathful gods, jealous gods, gods who demand human sacrifice. And there are benevolent gods of infinite love, who view all creation with a kindly eye wishing only for it to flourish and prosper. When we create consciousness, we become gods. We should start acting now like the kind of gods we wish to become.