AI-10: The nihilistic craving for x-risk

Existential risk is sexy. Like Oppenheimer, tech gurus can become Death, destroyer of worlds. The AI world is full of people who in one breath tell us that their chatbot is the greatest thing ever invented, the key to the mysteries of consciousness itself, and in the next warn us that it may well kill every human being alive.

Sam Altman has some interesting things to say about the threat AI poses to the very survival of humanity.

the bad case — and I think this is important to say — is, like, lights out for all of us.

As for the next sentence, each time I read it I am newly amazed. It’s possibly one of the most incredible things a human has ever said.

I think AI will… most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning

That a person could even put these ideas together and let them pass their lips is incomprehensible to me. But anyway, he is clear: AI is not just a threat, it is the threat.

development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

In a formal statement on the OpenAI website, he says:

Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

His views are normal in the AI community.

Dario Amodei, CEO of Anthropic: “I think at the extreme end is the … fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.”

Elon Musk, founder and CEO of xAI: “one of the biggest risks to the future of civilization is AI”.

Sundar Pichai, the brahmin CEO of Google: advanced AI “can be very harmful if deployed wrongly,” and that with respect to safety issues, “we don’t have all the answers there yet, and the technology is moving fast. … So does that keep me up at night? Absolutely.”

Shane Legg, co-founder of DeepMind: “We have no idea how to solve this problem.”

Since everything that is happening today is unprecedented, we cannot rule out the chance that AI will destroy humanity. But I believe this is the first time the developers of a technology have flirted so openly with nihilistic desire. They sell us anxiety and fear because they know we’re addicted.

But I think it’s more than that. It’s not just a cynical marketing ploy, nor is it rational. It’s nihilism. I think they want humanity to die. I think there’s a deep self-hatred, manifesting as hatred of humanity, that at some unconscious level is fueling this inchoate need to propel ourselves to a future where we no longer cause any problems because we don’t exist.

In Buddhism we call this vibhavataṇhā, the craving for annihilation. It manifests as addictive and self-harming behavior, and in extreme forms as suicide. But these guys are coyly flirting with the suicide of the whole human race. Look, if that’s their thing, fine. People are messed up. All I’m saying is that sane people need to stop them.

It’s impossible to guess the actual risk and foolish to try. I personally agree with Émile P. Torres that the risk of extinction is overrated. It’s difficult to imagine a genuine scenario leading to extinction. That, however, doesn’t change the fact that, as the OpenAI website says, “a misaligned superintelligent AGI could cause grievous harm to the world”.

The real risk is the actual things that are destroying the world, primarily climate change. AI distracts us from the real problems, offers no genuine solutions, and in doing so consumes a vast amount of energy, both physical and mental.

An OpenAI employee tweeted that, since everything is accelerating, we should just relax and worry about “stupid mortal things” like spending time with our families while AGI becomes a reality: “I don’t feel any control everyone else certainly shouldn’t”.

Has any product ever been sold with such reckless nihilism? Even with nuclear weapons, their makers would tell us they could destroy the world, but at the same time reassure us that they would try their best not to use them. How is it even vaguely legal to advertise your product as a world-destroyer and then just put it into everyone’s computers?


From the Torres article (thanks for this super-helpful reference):

Far from being “utopia,” the grandiose fantasies at the heart of TESCREALism look more like a dystopian nightmare, and for many groups of people it might even entail their elimination. Almost every imagined utopia leaves some people out — that’s the nature of utopia — and so far as I can tell, if the TESCREAL vision were fully realized, most of humanity would lose (to say nothing of the biosphere more generally).

This frames the likely outcome. Almost everyone is left out except for … well, we know who. For the last few months I’ve imagined thermonuclear disaster as the existential outcome. But even there, I suppose it’s not really going to make 100% of humanity, and other life forms, extinct.

But I think it’s more than that. It’s not just a cynical marketing ploy, nor is it rational. It’s nihilism. I think they want humanity to die. I think there’s a deep self-hatred, manifesting as hatred of humanity, that at some unconscious level is fueling this inchoate need to propel ourselves to a future where we no longer cause any problems because we don’t exist.

In Buddhism we call this vibhavataṇhā, the craving for annihilation. It manifests as addictive and self-harming behavior, and in extreme forms as suicide. But these guys are coyly flirting with the suicide of the whole human race. Look, if that’s their thing, fine. People are messed up. All I’m saying is that sane people need to stop them.

This is a new way for me to reflect on the 100% existential spiel. :thinking: I wouldn’t have been able to see this dhamma on my own.

:elephant: :pray:t2:


Gotta reflect on that. Can’t really say that people in AI in general, including me, aren’t all somewhat depressed…

My reading of this is that roon firmly believes that AGI is just a natural consequence of compute scaling laws, much as Moore’s law is not driven by any one person or any single company.

They’ve been a super insightful commenter in this space, at least partly because they used to be on the other side.

Not to dig down into painful feelings, but I really wonder about this. Like, I don’t personally know lots of people on the inside, but Torres definitely thinks there’s a drive towards nihilism. Once you notice it you see it all the time.

Right.

Let’s not forget the accusations that Buddhism is nihilistic as well.

I don’t know how things are among the employees of these companies, but since I have experience moving between Buddhist Studies and computer science departments, when it comes to the research side of things I’d be very careful with statements like ‘group X tends more towards depression or nihilism’.
I guess on a statistical level, depression is (at least in North America) more common among humanities folks, and the reasons are more than obvious. I’m not surprised to encounter a healthy, positive, energetic work atmosphere where I am; that’s true on both sides, but there is a lot of selection bias at play here.


Follow the money. Money and power go hand in hand. There are many different kinds of power: military, labour, distribution.

All these aspects of our society have been sold on AI.

For example:

“Create robots, give them AGI and you’ve released an entire sector of society from the bonds of physical labour.”

“Put weapons in their hands; give them the keys to the nukes; and you’ve eliminated human casualties from war.”

“Automate our distribution system, and you’ve now got the capacity to feed the world.”

We could easily set aside the issue of AGI for 15 years and just work on automating robots and distribution systems with the kinds of artificial systems we have now.

I think there’s a weird tension in the fact that we somehow desire robot “slave labour” and military capacity, but we also want to give the robots the ability to think for themselves and understand their own oppression.

Heck, everyone knows you don’t allow an oppressed class the ability to think about its oppression! And you certainly don’t give them guns. You just work them till they die, and confuse and distract them with sense pleasures until they reach 40 and are too old to march into battle.

Eventually some group is gonna launch a “free the robot slaves” movement. Whether we automate it in 15 years with “stupid” AI or not, the creation of a robot slave class is going to cause problems.

AGI will eventually come around, and we’ll be so dependent on it that things will go “Matrix” style. Or we’ll get super-advanced robotics and no one will have any reason to do anything. Or the power-hungry will sell us on free robot labour and just devise other ways to oppress us.

Complicated issue.