AI-4: The making and breaking of delusion

Please take care: this essay contains disturbing descriptions, including the sexual abuse of an infant.

In this essay, I outline the basic purpose of the field, which is signified by its very name: “artificial intelligence”. The machines are not intelligent, nor will they ever be. Rather, they were built to fool the user into thinking they are intelligent.

When an AI like ChatGPT spits out a stream of bits, it appears to a “user”—that is, an actual human being like you or me—in the form of a series of words. Human minds, adept at reading faces in clouds, take it as meaningful. Our perception (saññā) interprets meaningless streams of bits in terms of familiar categories like “correct” or “incorrect”. We see whether its grammar and spelling are “right”, how its “style” appears, whether it “answers” the question, and whether it “makes sense”.

These are perceptual categories relating to human consciousness that do not exist in a machine. A lump of data is not “correct” or “incorrect”. It’s a category mistake to think of it in that way, an epistemological absurdity.

We are training ourselves to think of these machines as if they were human, and in doing so, we ourselves become more like machines. Our thoughts follow the machine along pathways suggested by probabilistic networks. We experience AI-vomit in our own consciousness, taking meaningless slabs of data seriously as representations of meaning.

As one research paper puts it:

The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it.

This is not an implementation detail. It doesn’t just happen to be the case that AIs fool people into thinking they convey meaning; that is their purpose. Often this is left implicit, but they don’t really try to hide it: one of the AI companies is called “Anthropic”, making their goal clear. Generations of computer programmers have busied themselves with this bizarre, narcissistic project: what if we could make a machine that is just like us, but more so?

AIs generate images of humans that are praised for being lifelike. But why, exactly? What does it accomplish?

As soon as people can make an AI-generated person, what do they do? They turn human women into porn objects. Teen boys use them to strip the clothes off the girls in their class. Thousands of women have already been subjected to this degradation, with no consequences for the AI companies. Almost all deepfakes are porn.

In the back of my mind, there is an image that I just haven’t been able to shake. It keeps coming back, like a recurring nightmare. It was a while back, when new AI projects were appearing all the time, and I was still intrigued by the possibilities. On Hacker News—a tech news aggregator—a new image generator was announced. Nothing special, and no warnings or anything. So I thought I’d check it out. On the home page was a dynamic feed of the latest images created by users of the demo. And right there in the feed was an AI-generated image of an infant, maybe meant to be six months old. She had been raped, and was sitting there smiling happily. God, I wish I had never seen that.

I could not really comprehend it at the time. Seeing something like that changes you. Something broke in me that day. I can’t see anything AI without seeing it as deeply perverse, just disgusting at an existential level. It’s not that the thing itself is evil; it’s that it’s not. It’s just nothing. It can make something utterly perverse and depraved, and it means nothing.

I was brought up as a Catholic. Nothing too extreme—this was Australian Catholicism—but I believed in God and went to church. And words like “God” and “redemption” and “salvation”: they might not have meant all that much to me as a kid, but there was a feeling there. Some resonance; the words have a halo that makes them shine with significance. Then I lost my faith and moved on.

Much later, when I heard those words again, they had a puzzling quality. Like an echo of something that once meant something. The meaning feels like childhood: like a song or a book you loved as a kid that just seems silly now. You can somehow feel that the feeling was once there, but you can’t feel it any more. And when you see people taken with that feeling, lost in what you now know to be a delusion, they seem like strangers lost in a strange land, following directions on signs that have been mistranslated.

I’m not sure if I’m making any sense. But anyway, that’s how I react to AI now. It’s just dust. It once meant something to me and so I can recognize when it means something to others. But they seem to me like they’re caught in a cult, transfixed by a mirage.

The AI itself doesn’t know anything of this. It just takes data, mashes it up, then recombines it according to a prompt. It has no consciousness so it can have no delusion. But it fuels delusion in those unwary enough to consume its output, namely, humanity.

They speak of AI “hallucinations”, and subject it to tests of “correctness” where it ranks at a certain level of IQ, or passes an exam measuring some “objective” capacity. But the purpose of AI is to create hallucinations in human beings. It is not “hallucinating” anything, since it has no inner states or experience. To describe it as hallucinating is itself a hallucination. We are hallucinating our experiences of consciousness onto it.

And if an AI passes a human test, this does not tell us that the machine is becoming human. It tells us that what we value in human consciousness is how machine-like it is.

I find such things revolting, and feel a sense of betrayal any time I mistake machine garbage for human thoughts. Roboticists speak of the “uncanny valley”, where machines create a sense of unease when they become almost lifelike but not quite. But I think this is mistaken. I think the unease doesn’t go away when machines become fully lifelike: it amplifies a thousandfold. Because then you can’t trust anything.

In Buddhism we have two closely similar terms, avijjā (“ignorance”) and moha (“delusion”). These terms are mostly synonymous, but I believe there is a subtle difference. Ignorance is the absence of knowledge, the darkness of unknowing. Delusion, on the other hand, is a force that actively twists knowing, hiding the truth by creating the false belief that one’s perceptions are reality.

AI proponents claim to want to ameliorate ignorance by making old knowledge available for everyone, and perhaps even by creating new knowledge, although this exists mostly as a marketing promise. In reality, their machines spin delusions, fooling people into taking data outputs as conveyors of meaning.


Thank you for sharing these essays, Bhante!


Wow! Right on point! Thank you.
:pray:


Thank you so much for writing this. I have been feeling increasingly uncomfortable about AI but unsure of how to share my concerns with others. This gives me clarity about how to articulate my feelings.


Thank you, Bhante, for giving some more context re: where you’re coming from now. It makes sense to me, the way you described it.

Can we solve the democratization of access in other ways? I don’t know of any, other than the major projects like SuttaCentral.

:pray:t3: :elephant:


I’m sorry you had to see such an image, Bhante. In my lay life I worked for CPS, and for a few years took part in child sex abuse investigations. I’ve asked children things no adult should have to ask them.

It is not surprising that such an image would be made; these kinds of things are more common than most would think. I’ve already seen arguments that AI pictures, or child chatbots, or sex dolls in the shape of children would be better for pedophiles (or “MAPs”, as some try to call them today), so that they won’t harm real children.

AI seems to me to be a tool for human endeavors, both good and bad. I have my concerns about it, especially as different political groups, countries, religions, etc., may control AI for their own benefit. I suspect that in the far future it may be of benefit to humans, but we are in a hyper-novel period, and a lot of bad stuff will happen. We are in “interesting times”, as the Chinese curse goes.

Anyways, Bhante, I appreciate you making these posts, and I share your concern about the dilution of the Dhamma because of junk.


Thanks! I was nervous about how this would go down, and am heartened that people seem to connect with it.

There’s an irrational component to all this, and I think we need to acknowledge that.

Oh, thanks so much! I have very slowly been digesting this for months, trying to figure out that sense of what is wrong.

I’m sure that my own position must seem arbitrary to many. I love tech, but something is just going wrong.

I think this is a straw man. Buddhism has been doing just fine for 2,500 years, and we have more open access to our teachings than ever.

I did not know that. Thanks for helping!

Maybe! Who knows?
