AI-6: How to build an atomic bomb from dynamite

There’s no way to test for the presence of consciousness. The problem is, you don’t need to be conscious to end the world. All you need is to push a button. I mean, a shorted switch nearly destroyed the world that one time. While I believe machines will never be “intelligent”, I also believe that humans have an almost irresistible tendency to impute awareness to inanimate objects. So the belief in intelligent machines is certainly dangerous.

People will fall in love with a bot. They will marry them. Fans will kill to protect their AI singer. And they’ll kill themselves when the bot is taken offline. People are already contemplating suicide just thinking about AI.

I don’t believe the more extreme forms of AI fears; I don’t think it will annihilate humanity, and I can’t even really imagine how it might do such a thing. But it can certainly be very dangerous. As AI is stuffed more deeply into more roles, it will have the potential to start wars, or commit war crimes, or create disasters of other forms. It won’t care, it’s just a machine.

Alan Turing proposed that, for practical purposes, if a computer can fool us into thinking it is human, we should regard it as thinking. (Turing was more subtle than that, but that’s the basic idea.) It’s a behaviorist model: a mind is what acts like a mind.

In practice, there will be no single, clean, rational decision. Everyone will make up their own minds. Laws will define it. Cults will worship it. Fools will believe it. Psychopaths will use it. Madmen will war over it. And we’ll all have to live with it.

The more AI is pushed, the more it becomes an inescapable layer in everything we do. When we contact Social Security for our payment, when we want to speak to a doctor, when we want to adjust an image on our computer, when we want to get an education, we will increasingly have no choice. And when using it, like it or not, it becomes normalized. The boundary between self and machine will erode.

For kids growing up in that world, the theoretical difference between a machine and a sentient being of inherent worth will fade. They’ll be using social media where AI is embedded in every interaction. You know how, a few years ago, Gmail introduced “suggested replies”? Like that, but twisting and shaping our words, our images, our videos and memes, participating in conversations. They’ll guess what religion you follow and shape the discourse accordingly. And we’ll soon accept this as normal, just as we accepted Facebook telling us who might be our friends. (Well, I didn’t, I thought it was creepy and noped out.)

No-one understands why AI produces the output that it does, and no-one can predict it. Sam Altman said, “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.” Yet it is constantly being pushed into new domains with reckless speed. We cannot predict or control what it will do.

When you and I critically engage with these ideas, as, one assumes, reasonably intelligent and educated persons, we naturally tend to assume that the experience of others is similar. We can reflect on AI and make meaningful choices in how we use it. But this capacity is by no means universal.

Here are some people who may have little capacity to reflect meaningfully on AI and its ethical impacts.

  • People with no experience of computers. (Maybe one third of the world’s population.)
  • People with very limited use of computers. (For many people, all they know is Facebook.)
  • People who are very young.
  • People who are very old.
  • People living with intellectual disabilities.
  • People living with chronic paranoia.
  • People living with schizoid or other delusions.
  • People with no education.
  • People who are very sick.
  • People who are very stoned.
  • People who are horny teens.
  • People who are lazy and distracted.
  • People who hate all that tech stuff.
  • People who come home exhausted after a day’s work.
  • People scrolling lazily in bed after a couple of drinks and a xanax before they fall asleep.
  • People whose job depends on them not asking these kinds of questions.
  • People exhausted from caring for children or others.
  • People who speak only an endangered language.
  • People sitting in Parliament or Congress with very limited understanding of technology and a long roster of tech lobbyists with deep pockets.

That’s a lot of people. No-one at OpenAI or any of these companies is really thinking of them. Of course they will tell you that AI will help empower and enable everyone. They are lying. They care about themselves and people like them.

It’s not really the case that no-one is thinking of the people who don’t understand AI. Scammers are. The Washington Post reports that “Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.” AI plus vulnerable people is a perfect recipe for scams.

However, the reality is that none of us are secure, no matter how educated and alert. We simply can’t tell. What then are we to do?

I don’t think the Turing test, or anything like it, is remotely adequate to establish the presence of consciousness. In fact, I would argue that current AI disproves it once and for all. These systems are clearly not conscious, yet they are quite capable of fooling a large portion of humanity a large portion of the time.

But I think the Turing test is still useful. AI devotees want to create things that seem conscious, because they believe that these are steps to creating something that is actually conscious. I think they’re delusional, but I may be wrong; these are untrodden fields. We should absolutely take them seriously. The fundamental idea is not that this particular technological course will create consciousness, but that something will, and building AI is how we learn.

Imagine a terrorist group that openly declared that it wanted to acquire nuclear weapons. To that end, they start stockpiling dynamite. “When we get enough dynamite,” they say, “we’ll make a nuclear bomb.” “Hang on,” you think, “dynamite is completely different from nuclear weapons. You can’t make an atomic bomb out of dynamite, no matter how much you have.” So you dismiss them as cranks and move on.

That would be a mistake. For a start, the very existence of a group that wants to wield nuclear weapons is inherently disturbing. And if a group that mad has access to piles of dynamite, that’s really disturbing. You can do hella damage with dynamite.

But it’s more than that, because you can, in fact, get nuclear weapons from dynamite. One pathway: you lend your terrorist services to a nuclear nation. Use your dynamite to blow up some train stations, level some concert halls. As you do so, you become a trusted ally, then a confidant, and ultimately a leader. Now their nukes are your nukes. This kind of thing happens all the time, it’s how politics works. And that’s why we take as a serious threat the existence of a group that wants nuclear weapons, no matter how implausible we feel their aims are.

Equally, when people say they want to create AGI, and are taking serious steps towards it, the steps themselves are dangerous enough. But the real problem is the existence of vastly wealthy and powerful people obsessed with the belief that they are creating consciousness. When they say their work is making steps towards AGI—and they say this all the time—then we need to take them seriously.

And we need to stop them. We’ve seen above some criteria we can use in judging whether to ban AI.

  • Do its makers claim it is a step towards AGI?
  • Does it fool people into thinking it’s human?
  • Do its makers understand it?

There’s another crucial criterion. AI is driven by hype, and that hype is sold by the men in charge, which raises the question: what kind of men are these? And can we trust them?


Would this lead to the understanding of dukkha and the uprooting of dukkha? Is this the way? Are they the cause of dukkha? Would this save the world from inevitable destruction, from impermanence?

Does this mean that there is also no way to test the absence of consciousness or its cessation?

Yes, people with Alzheimer’s also lose the ability even to know the difference between a real living cat and a robo-cat, but they can enjoy caring for the robo-cat as if it were a real cat. One woman approached me and asked me…“is this cat still alive, it is so still”…the batteries had to be renewed. In a way it is also endearing sometimes. It is still nice to see that she cares for the cat, wants to please it. Some also have dolls whom they see as real children.

Some people see the environment, such as a mountain, the land, or a forest, as living, as animate, and that seems more wholesome than seeing it in a materialistic way as only molecules, or as a source from which to earn a lot of money.

I do not really know whether, on balance, the world becomes a better place when we start to see each and every thing as soulless, as mere impersonal processes or things.

I feel that, in general, the lie of technology is this: that when our lives become more comfortable and easy due to technology, that is wholesome and good.

How do you ban it in countries like China, Russia, North Korea, … Sadly, like nuclear proliferation, I don’t think it can be stopped. The cat is out of the bag.


You should look to edit this AI-n series into a single article, something that can be shared and syndicated, when complete. What a fascinating and thoughtful understanding of, and elucidation of the problems with, so-called artificial intelligence.


This is an important list. I estimate that, for people who “hang out” with Discuss & Discover regularly, about 75% of the list represents the people who are in their mind’s eye when reflecting altruistically on AI’s potential.

We’re talking about something as crucial as radical containment of nuclear material; the immediate, full-scale remediation of global warming; and the immediate, full-scale remediation of population growth. Downstream, those who are most vulnerable or historically oppressed are in the crosshairs of these momentous decisions.

Crucial discussion hurts. I’ve never really found a way around that in these 60 years. (Granted, I didn’t know how to have those until I had some language skills.)

So, eventually – for me, personal context comes into view. I’m reminded of the Buddha’s illustration in AN 5.162 of how we rid ourselves of resentment toward someone else. The caring of someone else’s body and basic well-being is personal at some level. At the end, we really are caring for each other’s bodies. Just ask my 91-year-old parents or my 64-year-old profoundly disabled sister (who wouldn’t be able to put it in those terms).


For a start, you support the initiatives to legislate big tech in your own country.

Thanks so much for your support! In fact it began as a big essay which has grown over the past few months, and I am breaking it up into chunks, mostly so I can get it finished and move on!

Indeed, human consciousness is many and varied, and we tend to assume that we, and other people, will be at our best.
