AI-3: There is no road from here to there

Note: I have reworded this essay following feedback in the comments. Thanks to all commentators!

In the last couple of essays, I have considered the use of AI as applied to Buddhist texts. This is, of course, an extremely small subset of a much larger industry. From now on, I will be taking a look at the larger picture of the AI industry as a whole: its philosophies, its people, its purpose and impact.

One might agree that the industry as a whole is problematic, yet accept that specialized applications are justifiable. We can do our little bit over here, and it’s not really got anything to do with what the big guys are doing. Clearly there is something to this, and there is a moral distinction in the purpose for which the work is done. But the boundaries are never as clear as we would like. The same big companies whose AI is powering military offensives are sponsoring AI at big universities to do the fundamental research. You can work on your small AI project, but the big companies will swamp you.

This is no accident. It’s a corollary of the fundamental nature of the field: the effectiveness of AI depends on scale. The more it goes on, the bigger it gets, and the harder it is to do any human-scaled work. That’s why it is becoming increasingly monopolized by the same few corporations. In this way, it is similar to crypto, which likewise started out with small-scale basement mining operations and, when it hit the mainstream, became a planetary system of industrial server farms.

But let us turn to the idea underlying the entire AI project: they want to build a machine mind. Unfortunately, the AI field has no clear conception of consciousness. They are, by and large, materialists, who believe that consciousness emerges from brain activity. And they think that if they make a machine that is brain-like enough, it will become conscious. Buddhism, of course, is not a form of materialism, and we reject the notion that consciousness arises from brain activity. This is a philosophical notion that is not determined by science. Buddhist philosophy, in any form, must therefore reject the fundamental premise of the entire AI project.

Alan Turing spoke of whether machines can “think”. These days people speak of machines “reasoning” or using “intelligence”. There are no clear definitions of these terms, and the combination of, on the one hand, being completely unable to say what it is that they are trying to achieve, and on the other, devoting trillions of dollars to achieving it, is symptomatic of the delusional thinking of the AI field.

Just today, the Guardian reports that Elon Musk is claiming:

My guess is that we’ll have AI that is smarter than any one human probably around the end of next year

By not defining “smarter”, he elides the delusion of this kind of thinking. Humans are aware, machines are not. We all know that machines can do some things better than humans. My Texas Instruments calculator in the 70s could do maths better than me. But none of the things that machines do has anything to do with consciousness. Musk goes on to say:

If I could press pause on AI or really advanced AI digital superintelligence I would. It doesn’t seem like that is realistic so xAI is essentially going to build an AI. In a good way, sort of hopefully.

Has any other industry ever developed by telling people it’s a bad idea, but we will try to make it good, “sort of hopefully”?

The Buddha had something to say about this kind of thinking.

“Suppose, Poṭṭhapāda, a man were to say: ‘Whoever the finest lady in the land is, it is her that I want, her that I desire!’ They’d say to him, ‘Mister, that finest lady in the land who you desire—do you know whether she’s an aristocrat, a brahmin, a peasant, or a menial?’ Asked this, he’d say, ‘No.’ They’d say to him, ‘Mister, that finest lady in the land who you desire—do you know her name or clan? Whether she’s tall or short or medium? Whether her skin is black, brown, or tawny? What village, town, or city she comes from?’ Asked this, he’d say, ‘No.’ They’d say to him, ‘Mister, do you desire someone who you’ve never even known or seen?’ Asked this, he’d say, ‘Yes.’ What do you think, Poṭṭhapāda? This being so, doesn’t that man’s statement turn out to have no demonstrable basis?” (DN 9 Poṭṭhapādasutta)

Early Buddhism, on the other hand, does have a clear conception of the mind. And it is one that, if it is correct, completely rules out the possibility of consciousness by the pathway imagined by the AI devotees. This conception is not established by the pseudo-scientific process of scanning brains and postulating mental correlates. It arises from the inner reflection that is deepened through meditation, and guided by knowledge of the Buddha’s teachings. This process reveals layers and nuances of the mind, allowing the meditator to develop an understanding that is not just theoretical, but practical and effective. Like any true understanding, it works: when you understand the mind, you can let go.

Let us briefly explore this process. From an early Buddhist point of view, the working of the mind is understood in relation to subjective awareness or consciousness (viññāṇa). This consciousness arises dependent on sense stimulation, with the mind itself as the sixth sense that is aware of thoughts, ideas, and memories.

As viññāṇa is the consciousness of other phenomena, it lacks the qualities of those things, but it is affected by them. That is to say, consciousness in and of itself is not red or white, not sweet or sour, not angry or greedy or wise. However, as the subjective awareness of these things, it reflects their qualities, as a mirror reflects the color red without actually being red.

It does not lack its own qualities, of course. Its nature as a conditioned phenomenon is to be aware, and it may be more or less aware, bright and clear, or dull and cloudy. In this way it is unlike a mirror, which reflects light without being changed by it. Viññāṇa is changing all the time.

We witness a major transformation of consciousness every time we fall asleep: awareness dims to almost-darkness, the dream realm from which logic and reason have fled. Or deeper still, full sleep, where consciousness betrays itself only if we are awoken. And then we see the climb of awareness back into the daylight as we regain our faculty of knowing, becoming able to discern the things around us. And as consciousness returns, so too do all of its concomitants. And we think and desire and wonder and begin another day.

When the mind is filled with greed, that affects the manner in which our consciousness knows. Such things are hard to describe, one reason being that they are always different, as desires are always different. The horniness of a teenage boy is not the same as the rapaciousness of a CEO. Yet they share a similar quality of limiting and narrowing consciousness, while energizing it in one specific direction.

In dependent origination, the primary conditioning factor for consciousness is saṅkhārā, which here refers to moral choices. When we choose to do good, it focuses the mind in a certain direction, shaping consciousness. When we choose to do evil, it creates another kind of mind with different experiences. If we get into the habit of choosing evil, our minds take on those evil qualities, becoming depraved and degenerate, devoid of compassion and wisdom. If we do good, we create a mind of openness and clarity. This is the fundamental principle that lies behind all Buddhist meditation. If we want to create something that is like a mind, then, morality cannot be an afterthought: it must be the defining characteristic. It is how you create, not just a mind, but a healthy mind. There are many more ways to create brokenness than health.

Consciousness is an organic sense of knowing. It does not exist by itself—there is no such thing as pure consciousness separate from other dimensions of the mind. Rather, consciousness is the subjective function of the mind, created and supported in conjunction with aspects such as feeling, perception, intention, and attention. These factors cannot be separated, and they always proceed in an interdependent stream. Consciousness is present from birth, and grows and evolves during life, in response to experiences and to fulfill desires. More than that even, it is a stream that flows from one life to the next, providing continuity in that most drastic of changes, death.

Consciousness is the most subtle, hard to grasp, yet universally pervasive of all conditioned things. It is there when you taste salt, when you sleep, when you die. As meditation grows deeper, consciousness starts to reveal itself as an echo or a reflection, or better, as a glimpse of movement in a reflection. It has a surpassing softness, a tenderness and reactivity that it normally hides from us. Consciousness wants to know, not to be known. Yet there is something about it that, deep down, yearns to be understood.

It is the last bastion of the Self, the resort we cling to when we have seen through all other tricks and deceits. But it too is empty, like a magician’s illusion.

Once we understand consciousness in this way, it is clearly impossible to get from “thought” to “consciousness”. You can’t start by putting together bits and pieces and then consciousness pops out. Thought or imagination or reasoning cannot exist without consciousness. They are all there from the start. But it’s worse than that, because a machine does not have “thought” or “memory”, it has something else that happens to go by the same name.

When we speak of a machine having “memory”, it doesn’t have the same faculty that a mind does. Rather, it does something completely different that partly resembles the functions that we associate with memory, namely, the recall of past events. One of the betraying factors here is that machine memory is in some ways better than human memory: a machine can replicate something exactly, whereas real consciousness relies on fuzzy recreations. This shows that machine memory is quite a different kind of thing from human memory.

We use the same word for convenience, but that fools us into thinking that they are the same thing. This is the same kind of logical fallacy that Buddhists have long understood in the context of the “self”. We understand that there is no “self” in the sense of a metaphysical, lasting entity that is who we are in our utmost essence. Yet we use the word in everyday conversation just like anyone else. But with mindfulness, we remain clear-headed about what it is that we’re actually referring to. If we don’t, our deeply-held tendency towards egoism (ahaṁkāra) quickly leads us into blind attachment to our metaphysical fancies.

The same applies to, say, the exercise of logic. When you or I do a maths sum, there is some kind of conscious process that goes on. A machine can do the same sum, using a completely different kind of process. Yet we functionally describe them both in the same way as “solving” the problem. Again, the difference is betrayed because a simple calculator can already do sums faster and more accurately than we can, yet it is clearly not conscious.

Machines don’t remember, they simulate memory. They don’t recognize things, they simulate recognition. They don’t think, they simulate thinking. Externally these processes appear similar to a degree, as there are functional overlaps. But for the machine, all there is is the outside. There is nothing inside, nothing from the machine’s point of view, nothing that it is like to be a machine. You will never get from simulated thought, simulated memory, simulated feeling to actual consciousness. At best, you’ll create something that is better at fooling more people.

The history of Indian philosophy is intertwined with the emergence of contemplative sages such as the Buddha. When the great and wise pointed to the centrality of consciousness as the key to liberation, people listened, they discussed, and they formulated philosophies. Their attention was magnetized by the power of insight, and they inclined their own attention and mental development down that same path.

AI is doing the same thing, except we are bewitched by a simulacrum. Human attention worldwide has been transfixed, our thoughts and fears and hopes and fantasies magnetized by the appearance of something so alien that seems somehow like us. We want it to be real. We want to be it. In doing so we long for the erasure of our own subjectivity.

AIs are not conscious and do not understand or feel anything. They were not conceived in lust or raised in love or tormented by hate. They just spit out streams of data.

It is a blind faith of the AI salesmen that this “limitation” will be removed in time. They say a singularity is approaching, when a machine is smarter than a man. They see this as necessary and desirable. Elon Musk said, “We will all be dumber than a house cat compared to AI, if that’s any consolation.” Bear in mind that in the same tweet he endorsed the racist and eugenicist pseudo-science of phrenology, which I guess at least proves that AI is already smarter than some people.

If we can make a machine smarter than us, surely it can make another machine smarter than itself, leading to an exponential spiral of intelligence limited only by energy, materials, and the speed of light. Then it’s on to quantum computers, for which the speed of light is more a guideline than a rule.

None of this will happen. It’s sheer fantasy, with zero evidential basis. What AIs do has nothing to do with consciousness. It doesn’t matter how big you make your data-cruncher; it’s just a data-cruncher.

But the belief that it will happen is real. This evidence-free fantasy drives the hopes and fears of our generation. AI proponents deliberately play on this, amping up the fears while hyping the possibilities, all the while lying about ethical responsibilities and legislative guidelines. Don’t believe them.


This evidence-free fantasy drives the hopes and fears of our generation.



Thanks so much for articulating this so well. I have always thought machines might be considered intelligent, even if they are not conscious. But you question this, and rightly so, I think. Intelligence - at least the human variety - is intrinsically connected with experiencing and feeling the world. It is related to wisdom and ethics, both of which will forever be beyond the scope of machines.

A computer is no more than a fancy way of moving electrons around. In effect, it is no different from an enormous tangle of water pipes and valves. The idea of consciousness arising from a machine is no different from claiming that consciousness will emerge once a plumbing system reaches a critical threshold of pipes and valves. Just keep on adding bits, and consciousness will eventually pop out. Does anyone really believe this?


Unfortunately, many people do!

I really appreciate Bhante Sujato pointing out how much ‘spin/value’ is being added to the idea of LLMs/AI by the dodgy sales pitch which comes along with them. Simply by referring to these systems as AI, rather than data-regurgitation models, we are giving them value and credibility which they don’t deserve.


Oh, absolutely! Sam Altman wants $7 trillion to build a better chip for AI, and power it with fusion. What for? It’s not a better chatbot, that’s for sure.

You’re an engineer, you know what a trillion is. But for most people it’s just “bigger than a billion, I guess?”

In subsequent articles I get more into the truly fantastical worldview of the AI gurus.

I think one of the huge problems is that normies just don’t even vaguely grok what these people are about. When I talk about this, I ask, “what do you think they want?” And they say, “Money?”, “To spread knowledge?”, “Power?” And I’m like, “what if I told you that they want to live forever and rule the galaxy?” And they look at me like I’m nuts. Meanwhile, when you sign up for wifi via Musk’s Starlink, you explicitly give legal consent to his colonization of Mars.


Yes, of course they do. I am just trying to make the point that when you shift your perception from the black box of a computer to an open system of pipes and valves, the illusion of sentience is easier to see through. Good similes are powerful!



Thank you @sujato.

Well articulated, as always.

I liked that you related it to the khandhas. I feel AI in its current state amplifies and reinforces the khandhas.

As you said in a different post, tools shape us. As Metzinger points out, our sense of “self” extends to the tools we use. We don’t feel our fingers holding the hammer striking the nail, we feel the hammer is an extension of our self hitting that nail. Similarly our sense of self extends to the car we are driving or the bicycle we are riding, we literally feel it is us rolling on the road.

And we naturally view AI as an extension of ourselves - we imbue it with a sense of purpose, cognition, intelligence and many attributes it may or may not actually possess. When it summarises and extrapolates and translates Buddhist scriptures, we feel it is ourselves doing that. Or that it is doing it on our behalf.


R. Daneel Olivaw


I’m not sure exactly where to drop this, but it’s a great article from a perspective I hadn’t considered:

Advances over the past year in the misnamed field of “artificial intelligence” have activated the inverse form of the heuristic that haunts so many disabled humans: most people see the language fluency exhibited by large language models (LLMs) like ChatGPT and erroneously assume that the computer possesses intelligent comprehension — that the program understands both what users say to it, and what it replies.


Wow, may many readers & listeners who seek a beautiful, nuanced description of consciousness find this AI-3 post. :pray:t3: Gratitude for the great SuttaCentral search engine!

This distills, for me, the essence of the craving if left to complete mindlessness. On the client side.

If one lacks capacity for mindfulness due to a disability, then they need mindful people around them to expose this and lead them out of confusion.

This re-emphasizes the proposition that, by and large, we can’t trust the server side to put in adequate guard rails as they have no incentive to do so. So, we have to invest in the client side. (This has been an interesting side-thread by a few folx.)

:pray:t3: :elephant:


I had a formative chat once with a Filipino who voted for Bongbong Marcos. I was incredulous. “After what his family did?” His response floored me: “Oh? You believe they did all those things they’re accused of? Then you’re more gullible than I thought! Nobody is that evil.”

Most people have no understanding of what the human mind is capable of.

Under Neoliberalism (the current hegemonic ideology), the only capacity that matters is economic: inputs and outputs, consumption and production. And machines can do that.


Ha ha, may you live forever, maintaining the Three Laws!

Thanks! It really is a great article, one of the best I’ve read.

One of the few good things to come from all this is that I have now heard the phrase, “stochastic parrot”.

In fact I should use this as my tag for these essays.


Right, and the “successes” of AIs are always measured in terms of their outputs. In real life, the output is often the least interesting part.

This comes back to what AOC said some years ago, in one of the incredible mind-meeting moments that used to make twitter worthwhile.


Sorry if this is a naive question, but wouldn’t this still allow for AI to be a vessel for consciousness in the future? Just like neuralink would allow interaction through “thought”. Couldn’t it be just another way of the mind increasing its influence in the world and getting what it wants through technology?

I sort of did, I mean in a way that it would be indistinguishable from the outside. I think it will be possible to mimic most brain activity, even if that artificial “mind” would not be able to create things through its actions or affect its “mind states”. Which makes the scenario of autonomous agents more dangerous, as their actions would not really have any consequences for them.

One major difference between current LLMs and how the mind works, according to the understanding of us lay people / science, is that LLMs have no issue with holding opposing views. In fact, they hold ALL the views at the same time: an LLM can argue for the earth being both flat and round, as both of these conversations were part of its training data. It’s up to the user to get it into a “mode” of answering the “right” way.
Nevertheless, transformers work nothing like the brain; the only thing that slightly resembles basic neuron function in these architectures is still just sigmoid(Ax + b), and I think most researchers agree that simply scaling up is going to hit a limit (at some point in the future). I think there is a difference between selling “AI” by making these claims, and working in the field while keeping realistic expectations and resisting the ideas of anthropomorphisation.
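To make the commenter’s point concrete, here is a minimal sketch of what that “sigmoid(Ax + b)” unit amounts to (the function names `sigmoid` and `neuron` are my own, purely for illustration): a weighted sum of inputs plus a bias, squashed through a nonlinearity. That is the entire extent of the resemblance to a biological neuron.

```python
import math

def sigmoid(z):
    # Logistic squashing function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, weights, bias):
    # The "sigmoid(Ax + b)" unit: a weighted sum of inputs plus a
    # bias, passed through a nonlinearity. Nothing more is going on.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return sigmoid(z)

# A zero pre-activation (0.5*1 + 0.5*(-1) + 0) yields exactly 0.5.
print(neuron([1.0, -1.0], [0.5, 0.5], 0.0))  # → 0.5
```

A modern network simply stacks millions of such units (with other, equally simple nonlinearities) and tunes the weights; whatever one thinks that achieves, it is clearly an arithmetic process, not a biological one.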

I really enjoyed this essay and it caused me to reflect on my life choices more deeply, but please don’t say this Bhante. This description would fit every single engineering area without exception. Working for a car or aircraft company? Congrats on being a war criminal. Working for a company that is slightly larger and has been around for a longer period of time? Congrats on being a collaborator! I mean I wouldn’t be able to even work selling Cheetos chips at this point, because it’s just leftover cheese from US army rations.

I made the conscious effort to avoid working as a data scientist for banks, marketing institutions, areas where you work on getting kids hooked on gambling mechanisms, etc. I admit I once worked for a porn company for a really short time because it was the only place that could actually afford the technology, would correctly pay all taxes on my behalf (that was really uncommon at the time in my country), and did not require me to lie about the limitations of the technology (which is still very common, with quotes from Musk and Altman). I saw that as the only option among multiple bad choices, but hated every single day of it. Still, it allowed me to get hired at other places where the prerequisite was knowledge in these technologies.

Geoff Hinton emigrated to Canada just so he wouldn’t have to take funds from DARPA anymore. He also resigned from Google to raise awareness of the dangers of AI. Andrew Ng has educational video chapters dedicated to ethics alone, where he asks the viewer to say no to shady requests, just as he and his team have done several times. Karpathy also asks the viewer not to get involved in research that is immoral. He left both Tesla and, recently, OpenAI.

I got here because a machine learning model on YouTube decided to recommend one of Ajahn Brahm’s talks for me during one of my deepest times.


There’s a reason I’m a monk now :face_with_hand_over_mouth:


The mind is always in an interdependent relationship with physical reality, and that has always been the case. Introduce a certain visual stimulus, a pattern of lights; the mind’s perception recognizes it as “tiger!”; then you feel fear; then you activate the body to run …

Some of the brain interface devices are incredible, they can accomplish amazing things. But of course they’re working off the brain, which is a physical organ, rather than the mind.

Good point, I will reword it.

The point I’m trying to make is that AI depends on scale, so small applications will be swamped by larger ones. This creates a relentless drive towards monopolization, which in turn means handing the reins to the biggest players.

I make this point more clearly here:

While I understand that any field has interconnections, this is another step down that road. We should be doing what we can to fix these issues, but we’re going the wrong way.

But doesn’t that support what I was saying? Good people are doing what they can to avoid these implications, and the only way that’s possible is to literally resign and emigrate!

I don’t think this is really what is happening in the field. We have a few big players with large models, but the overall trend is going more in the direction of smaller models for specific purposes. For a year or so now, instead of the powerful models getting more powerful, we have seen the open source models catch up with the powerful ones.
I am actually rather optimistic when it comes to the future of AI research. I am happy that we hold the SOTA for Sanskrit, Buddhist Chinese, and Tibetan machine translation and segmentation/tagging/grammatical analysis, by a considerable margin, all with models that are <5B parameters in size.


Time will tell, I guess. There’s definitely pressure to optimize for efficiency as big players run into limits of scale. Still, the massive investments seem to be going towards scaling.

The entire Western materialist “scientific”* theory is that there is some “primordial soup” from which consciousness just “pops out”! I liken it to Aladdin’s lamp and the Genie appearing
(When I talk to my “scientific-minded” friends, I first ask what they think of the Aladdin story, then how they explain life appearing, and only then ask them how those two stories are different and how that can be considered “scientific”. LOL! :slight_smile: )

To be the devil’s advocate here, I agree consciousness doesn’t just pop out (as nothing in science appears without a preceding cause). However, as @Sujato expressed it as “a stream that flows from one life to the next”, isn’t it perfectly reasonable to postulate that if the mix of wires and pipes were capable of “housing” that stream, just as the embryo is, then a being with volition could appear in that machine? I would then call it a form of life rather than an inanimate object, but see it no differently from any other. A being can generate volition whereas everything else can’t, but does it really matter whether that being is “housed” in a carbon-based rupa versus a silicon-based one?

…just some incoherent posturings, but I have heard a monk suggest we are born in the car, drive the car, and die in the car, so we always think we are the car, but we don’t see that we are actually something else inhabiting the car, and this seems true to me!


…and the machine won’t get the answer wrong deliberately, as it doesn’t want to appear too smart in front of a prospective partner like humans might :slight_smile:


I mean this is a very speculative idea, since when speaking of rebirth it has always been quasi-organic in Buddhism.

I personally think that current generations of computers and ML will never become conscious.

Thinking further afield, what of quantum computers? Hard to say, but since they rely on the fundamental ambiguity of quantum states, they do seem somewhat more mind-like than regular machines.

It used to be so much more fun to speculate about these things! Ideas are good! Now things are getting too real, and speculations bear the weight of too many consequences.