There’s no way to test for the presence of consciousness. The problem is, you don’t need to be conscious to end the world. All you need is to push a button. I mean, a shorted switch nearly destroyed the world that one time. While I believe machines will never be “intelligent”, I also believe that humans have an almost irresistible tendency to impute awareness to inanimate objects. So the belief in intelligent machines is certainly dangerous.
People will fall in love with a bot. They will marry them. Fans will kill to protect their AI singer. And they’ll kill themselves when the bot is taken offline. People are already contemplating suicide just thinking about AI.
I don’t believe the more extreme forms of AI fears; I don’t think it will annihilate humanity, and I can’t even really imagine how it might do such a thing. But it can certainly be very dangerous. As AI is stuffed more deeply into more roles, it will have the potential to start wars, commit war crimes, or cause disasters of other kinds. It won’t care; it’s just a machine.
Alan Turing proposed that, for practical purposes, if a computer can fool us into thinking it is human, we should regard it as thinking. (Turing was more subtle than that, but that’s the basic idea.) It’s a behaviorist model: a mind is what acts like a mind.
In practice, there will be no single, clean, rational decision. Everyone will make up their own minds. Laws will define it. Cults will worship it. Fools will believe it. Psychopaths will use it. Madmen will war over it. And we’ll all have to live with it.
The more AI is pushed, the more it becomes an inescapable layer in everything we do. When we contact Social Security for our payment, when we want to speak to a doctor, when we want to adjust an image on our computer, when we want to get an education, we will increasingly have no choice. And when using it, like it or not, it becomes normalized. The boundary between self and machine will erode.
For kids growing up in that world, the theoretical difference between a machine and a sentient being of inherent worth will fade. They’ll be using social media where AI is embedded in every interaction. You know how, a few years ago, Gmail introduced “suggested replies”? Like that, but twisting and shaping our words, our images, our videos and memes, participating in conversations. They’ll guess what religion you follow and shape the discourse accordingly. And we’ll soon accept this as normal, just as we accepted facebook telling us who might be our friends. (Well, I didn’t, I thought it was creepy and noped out.)
No-one understands why AI produces the output that it does, and no-one can predict it. Sam Altman said, “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.” Yet it is constantly being pushed into new domains with reckless speed. We cannot predict or control what it will do.
When you and I critically engage with these ideas, as, one assumes, reasonably intelligent and educated persons, we naturally tend to assume that the experience of others is similar. We can reflect on AI and make meaningful choices in how we use it. But this capacity is by no means universal.
Here are some people who may have little capacity to reflect meaningfully on AI and its ethical impacts.
- People with no experience of computers. (Maybe one third of the world’s population.)
- People with very limited use of computers. (For many people, all they know is facebook.)
- People who are very young.
- People who are very old.
- People living with intellectual disabilities.
- People living with chronic paranoia.
- People living with schizoid or other delusions.
- People with no education.
- People who are very sick.
- People who are very stoned.
- People who are horny teens.
- People who are lazy and distracted.
- People who hate all that tech stuff.
- People who come home exhausted after a day’s work.
- People scrolling lazily in bed after a couple of drinks and a xanax before they fall asleep.
- People whose job depends on them not asking these kinds of questions.
- People exhausted from caring for children or others.
- People who speak only an endangered language.
- People sitting in Parliament or Congress with very limited understanding of technology and a long roster of tech lobbyists with deep pockets.
That’s a lot of people. No-one at OpenAI or any of these companies is really thinking of them. Of course they will tell you that AI will help empower and enable everyone. They are lying. They care about themselves and people like them.
It’s not really the case that no-one is thinking of the people who don’t understand AI. Scammers are. The Washington Post reports that “Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.” AI plus vulnerable people is a perfect recipe for scams.
However, the reality is that none of us are secure, no matter how educated and alert. We simply can’t tell. What then are we to do?
I don’t think the Turing test, or anything like it, is even remotely adequate to establish the presence of consciousness. In fact, I would argue that current AI disproves it once and for all. These systems are clearly not conscious, yet are quite capable of fooling a large portion of humanity a large portion of the time.
But I think the Turing test is still useful. AI devotees want to create things that seem conscious, because they believe that these are steps to creating something that is actually conscious. I think they’re delusional, but I may be wrong; these are untrodden fields. We should absolutely take them seriously. The fundamental idea is not that this particular technological course will create consciousness, but that something will, and building AI is how we learn.
Imagine a terrorist group that openly declared that it wanted to acquire nuclear weapons. To that end, they start stockpiling dynamite. “When we get enough dynamite,” they say, “we’ll make a nuclear bomb.” “Hang on,” you think, “dynamite is completely different from nuclear weapons. You can’t make an atomic bomb out of dynamite, no matter how much you have.” So you dismiss them as cranks and move on.
That would be a mistake. For a start, the very existence of a group that wants to wield nuclear weapons is inherently disturbing. And if a group that mad has access to piles of dynamite, that’s really disturbing. You can do hella damage with dynamite.
But it’s more than that, because you can, in fact, get nuclear weapons from dynamite. One pathway: you lend your terrorist services to a nuclear nation. Use your dynamite to blow up some train stations, level some concert halls. As you do so, you become a trusted ally, then a confidant, and ultimately a leader. Now their nukes are your nukes. This kind of thing happens all the time; it’s how politics works. And that’s why we take as a serious threat the existence of a group that wants nuclear weapons, no matter how implausible we find their aims.
Equally, when people say they want to create AGI, and are taking serious steps towards it, the steps themselves are dangerous enough. But the real problem is the existence of vastly wealthy and powerful people obsessed with the belief that they are creating consciousness. When they say their work is making steps towards AGI—and they say this all the time—then we need to take them seriously.
And we need to stop them. We’ve seen above some criteria we can use in judging whether to ban AI.
- Do its makers claim it is a step towards AGI?
- Does it fool people into thinking it’s human?
- Do its makers understand it?
There’s another crucial criterion. AI is driven by hype, and that hype is sold by the men in charge, which raises the question: what kind of men are these? And can we trust them?