Did the Buddha invent computer technology?

While most people know of the Buddha as an exemplar of peace and wisdom, and look to him for spiritual teachings and guidance, not many people appreciate how he anticipated many of the foundations of computer technology. As someone who works every day on Buddhist texts in a digital context, I can’t help but see a whole range of fascinating parallels, and in some cases, even historical precedent. Let’s look at some of the Buddha’s ideas that we can find in modern tech.

Binary code

The most fundamental principle that all modern computing technology rests on is binary code. A switch is on—current flows; it is off—current stops. From this basic function of on/off = 1/0 all our modern computers are built.
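The step from switches to symbols can be sketched in a few lines of Python. This is a toy illustration of positional binary notation, not of how any real circuit is built: a row of on/off states is read as the digits of a number, and the number in turn stands for a symbol.

```python
# A toy illustration: a row of switches, each on (1) or off (0),
# read as the digits of a binary number.
switches = [1, 0, 0, 0, 0, 0, 1]  # seven on/off states

# Combine the bits into an integer, most significant bit first.
value = 0
for bit in switches:
    value = value * 2 + bit

print(value)       # 65
print(chr(value))  # 'A' -- in the ASCII code, 65 stands for the letter A
```

Everything a computer stores, from suttas to selfies, is ultimately built out of such agreed-upon mappings between bit patterns and meanings.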

As is well known, the number zero was long unknown in the West. The reason zero goes unnoticed is not arbitrary; it rests on a fundamental principle of consciousness. We notice what is there, and especially, what moves. It takes effort and training to recognize absence, and even more so, to notice that absence is every bit as important as presence; in fact, that presence only has meaning because of absence. This is explained in such suttas as the seven elements at SN 14.11:

The element of light appears due to the element of darkness. … The element of the dimension of infinite space appears due to the element of form.

It is due to the Buddhist emphasis on emptiness that Indian mathematicians developed the notion of zero, and from there, exported it to the West. Once the idea of zero was established, it made possible the development of “codes” which allow for the translation of more meaningful expressions into binary form. And that, in a nutshell, is how computers work.

Quantum computing

I don’t want to give the impression that our computing technology has fully caught up with the Buddha—far from it! For the Buddha was not limited to mere binary logic, but frequently used the tetralemma, based on a four-fold logic:

  • A
  • not-A
  • both A and not-A
  • neither A nor not-A

This set of options extends the scope of binary logic, allowing for the possibility of shades of grey; that things in the world are not always reducible to one thing or the other, but allow for a complex, irreducible superposition of states. And this is the fundamental principle of quantum computing. Leveraging the ambiguity of the superposition of particles, quantum computing holds the promise of a, well, quantum leap in processing abilities; and perhaps, even a shift in the kinds of problems solvable by computers.

However, despite their great promise, quantum computers are still mostly theoretical, and practical demonstrations have been modest. Still, this is the most advanced frontier in current computer development.

Note that even quantum computers still don’t catch up with the Buddha, for they ignore the last item of the tetralemma. The idea of a state so subtle that it is not even definable by negation or superposition is not, so far as I know, something that has even occurred to modern tech.

Artificial intelligence

Modern development of AI is driven by the assumption that “intelligence” or perhaps even “consciousness” is not a metaphysical property, but a conditioned phenomenon. In this analysis, the Buddha too led the way. He rejected all metaphysical explanations of consciousness, and dedicated much of his life to showing how consciousness arises dependent on conditions.

Of course, the nature of those conditions differs. IT tech assumes that intelligence is a purely physical property that will emerge from hardware and software configured properly. It should go without saying that this is a sheer assumption, and there is no evidence for it whatsoever.

The Buddha rejected both metaphysical accounts of consciousness and materialist reductionism. He regarded both approaches as utterly inadequate, and failing to answer the question in any meaningful way. Instead, he pointed to experience. He looked at the conditions prevailing when actually experiencing consciousness, and saw how these could be trained and developed in a positive way, to overcome suffering. Rather than seeing mind and body as separate, or rejecting mind altogether, he saw the primary reality as the interconnection between physical and mental properties.

I think that the current approaches to AI will fail to develop anything even vaguely like “strong” AI; and I think the IT world doesn’t even really have any clear idea of what “consciousness” is. Nevertheless, there is no reason in principle why an artificially developed machine, a “computer” if we still call it that, should not support consciousness in a way similar to how the body does. Whether this ever happens is unknowable. Which brings us to:

The limits of knowledge

It’s tempting to think that we can have a full and final knowledge of all things, or at least, of all things we want to know. And the metaphysical basis of most religions is the assertion that they possess such unique and final knowledge, a source of solace for devotees. Yet the reality of our lives is far more limited than that, defined as much by what we don’t know as by what we do.

The quest of science was to take such claims out of the metaphysical realm and give them substance. By applying the methods of experimentation and inference we could learn all there is to know.

However, early in the 20th century a series of developments in science and philosophy put paid to this idea. Heisenberg realized that the position and momentum of a particle could not be determined with precision simultaneously: quantum uncertainty. Wittgenstein said, “Whereof one cannot speak, thereof one must be silent.” Gödel showed that no complete and consistent set of mathematical axioms was possible. And Alan Turing, the chief architect of the modern computer, applied this notion to his idea of the “halting problem”: can we determine with certainty, from a description of any computer program and its input, whether the program will finish running or continue to run forever? Turing showed that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
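Turing’s result doesn’t stop us from answering the question some of the time; it only rules out a method that works for every program. What programmers actually do is run a program for a bounded number of steps and answer either “halted” or “don’t know”. Here is a toy sketch of that workaround in Python, modelling a “program” as a generator that halts when it is exhausted (the model and names are my own illustration, not Turing’s construction):

```python
# No algorithm can decide halting in general (Turing, 1936).
# In practice we run a program for a bounded number of steps and
# report "halted" or "don't know". A "program" here is a Python
# generator: it halts when the generator is exhausted.

def halts_within(program, max_steps):
    """Return True if `program` finishes within max_steps steps,
    False if we ran out of budget (halting remains unknown)."""
    gen = program()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program finished
    return False         # budget exhausted: no verdict

def finite():   # halts after 10 steps
    for i in range(10):
        yield i

def endless():  # loops forever
    while True:
        yield 1

print(halts_within(finite, 100))   # True
print(halts_within(endless, 100))  # False (really: unknown)
```

Note the honest asymmetry: a True answer is certain, but a False answer only means we gave up, which is exactly the limit Turing proved we must live with.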

The acceptance of the limits on knowledge seems like a defeat. Even though everything we know, in fact, tells us that knowledge is limited and uncertain, we still long for certainty. And it is this gap that religion, and later science, claimed to fill. But the paradox is that once we accept the limitations on knowledge, we are free to focus on the kinds of knowledge that are actually achievable: real knowledge. And so the limitation of the halting problem led directly to the invention of modern software. So the fact that we can, today, google whatever subject we like and learn about anything is a direct result of accepting the limits of knowledge.

In the same way, the Buddha declared that certain things—most famously the ultimate origin of the cosmos—were unknowable. Rather than being a limitation of his philosophy, he saw this as freeing us from irrelevant distractions so we can focus on what is both knowable and important: the end of suffering.

Compression

Computers are great at generating huge stacks of data, but all that data has to be handled without choking the systems. We do that by using compression. The idea is that data contains a lot of repetition, and we can express it in a simpler form. Take a simple list:

1 + 1 + 1 + 1 + 1 + 1

We can express this more concisely by:

1 × 6

And that is essentially what compression does. It minimizes repetition and maximizes uniqueness of data.

This is something that we find constantly used through the Buddhist texts. They use signs, most commonly pe, to indicate points of elision, where text has been abbreviated and is to be expanded in full by the decompression algorithm, AKA the monk or nun doing the chanting. There are, based on a quick search, over 40,000 such instances in the Pali canon.

Redundancy

The flip side of compression is repetition. There are relatively few truly unique phrases in the Pali canon. Most things are repeated, either verbatim or with small variations. Important teachings are repeated many times. The first phrase of the jhana formula (vivicceva) is repeated 375 times, for example.

All this repetition serves an important purpose: preserving the integrity of the data. Important files are backed up in multiple locations. Here, “multiple locations” doesn’t mean hard drives, but, originally, the memories of reciters, and later, manuscripts. This creates a highly resilient system, which can—and has—survived massive outages across much of the network. Which brings us to:
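The error-correcting power of redundancy is easy to sketch. If each copy picks up errors independently, then comparing the copies and taking the majority reading at each point recovers the original. Here is a toy illustration in Python; the sample line and the “corruptions” are invented for the example:

```python
from collections import Counter

# Three "reciters" hold copies of the same line. Each copy has
# picked up one independent error (shown in CAPS). Comparing the
# copies word by word and taking the majority reading at each
# position recovers the original text.

copies = [
    "form feeling perception choices consciousness".split(),
    "form feeling perception CHOICES consciousness".split(),
    "form FEELING perception choices consciousness".split(),
]

restored = [
    Counter(words).most_common(1)[0][0]  # the majority reading
    for words in zip(*copies)            # align the copies word by word
]

print(" ".join(restored))  # form feeling perception choices consciousness
```

As long as errors strike different places in different copies, more reciters means more resilience, which is precisely why communal recitation and geographic spread preserved the texts so well.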

Distributed networks

These days, our computers mostly rely on centralized servers, huge banks of computers in dedicated facilities, which run the internet and much else. For a long time, though, advocates have pushed for a more distributed model. Under such a system, each computer would not be simply drawing from the server, but would itself act as a server, contributing to the network. Such a system has the advantage of not being centrally controlled, and hence being far more resilient. Today, advanced systems such as Ethereum are beginning to implement these ideas.

But the Buddhist world has always relied on distributed systems. The texts are not considered as being under the exclusive control of any institution. Rather, the institutions contribute, through training, education, and resources, to maintaining an open, flexible, and distributed grid. In ancient times, monks and nuns moved freely from one monastery to another, taking their texts with them. Later, different monasteries would maintain scriptures, which again would move from place to place.

Open source

Hand in hand with the notion of a distributed system is the concept of open source. Before the Buddha, the primary model of text ownership was that of the Brahmins. Believing themselves to be the chosen custodians of the literal expression of God’s word on earth, they jealously guarded the Vedas and passed them down in secret. That way they could control the texts, and, crucially, monopolize the revenues they gained from performing the Vedic rites.

The Buddha criticized this, and frequently endorsed and advocated an open source approach. He made all his teachings available to everyone, and expressly denied having the “closed fist” of a teacher. He encouraged open participation by all members of his community, explicitly stating that monks, nuns, laymen, and laywomen should gather and recite his teachings in harmony.

Today, the open source model has become one of the most distinctive features of the computing world. Despite the fact that we live in an era of unprecedented corporate overreach and intellectual property litigation, open source software has grown to dominate. As just one example, the most significant open source project, Linux, now runs almost all the internet, almost all supercomputers, most mobile phones and tablets, and a large number of embedded computers. After years of trying to extinguish open source, even Microsoft has capitulated: you can now install Linux from the Windows app store! And Linux is far from the only example. Discourse, the forum platform you are using, is open source. You are probably reading this in Chrome or Firefox, both open-source browsers. We use open source software every day, and we don’t even notice it.

This hasn’t happened because of a philosophical love of fairies and unicorns. In fact, it has happened despite a determined opposition by corporate interest. It has happened because it works: open source, in many areas, is simply a more effective way of doing things.

Version control, forking, and merging

In a world where complex software is often maintained by large groups of people across the world, managing the different versions becomes a significant problem. The same happened in the world of Buddhist texts. When monks and nuns spread out across India and further, taking texts with them, it became increasingly hard to maintain the consistency and integrity of the Dhamma, AKA the code base.

These days, we manage this using version control systems, of which Git is by far the most successful. And yes, Git is open source. Git allows for code to be forked (like when a nun sets out from a monastery for distant lands, taking the suttas in her mind) and merged (as when she arrives at a new monastery, and recites the texts she knows alongside the resident nuns).
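At its heart, a merge is a three-way comparison: each variant is checked against the common ancestor, whichever side changed a line wins, and if both sides changed it differently, there is a conflict for a human to resolve. Here is a toy line-by-line sketch in Python; git’s real algorithm works on diff hunks rather than assuming equal line counts, and the sample “recensions” are invented for the example:

```python
def three_way_merge(ancestor, ours, theirs):
    """Toy line-based three-way merge. For each line, keep whichever
    side changed it relative to the ancestor; if both sides changed
    it differently, flag a conflict. (Assumes equal line counts;
    real git merges variable-length diff hunks.)"""
    merged = []
    for a, o, t in zip(ancestor, ours, theirs):
        if o == t:
            merged.append(o)   # both sides agree
        elif o == a:
            merged.append(t)   # only their side changed this line
        elif t == a:
            merged.append(o)   # only our side changed this line
        else:
            merged.append(f"<<< {o} ||| {t} >>>")  # conflict!
    return merged

base   = ["thus have i heard", "at savatthi", "the blessed one said"]
fork_a = ["thus have i heard", "at rajagaha", "the blessed one said"]
fork_b = ["thus have i heard", "at savatthi", "the buddha said"]

print(three_way_merge(base, fork_a, fork_b))
```

The happy case, shown here, merges cleanly because the two forks touched different lines. The conflict case is exactly the situation of reciters who inherited genuinely divergent readings: no algorithm can decide, and the community has to.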

Sometimes this system breaks down, and a schism or split appears in the Sangha. Just as in the modern day, sometimes groups of developers cannot agree on how they want to take the project forward, so they fork the code and develop it independently. This has happened repeatedly in Buddhism, with the emergence of various sectarian canons. Nevertheless, while the texts diverge in countless details, it is obvious to all who have studied them that they hark back to a common “code base”, which must be the Buddha’s teachings as collected by the Sangha after his passing, before the “forking”.

In the Buddhist traditions, as today, forking is traumatic for the community. It’s rarely if ever just an even-handed assessment of different priorities and directions, but is driven by different values. So forking is often perceived as a bad thing, which consumes unnecessary energy and work. While there is of course some truth to this, it is also the case that forking can add additional resilience to a system. Sometimes it is not really clear what is the right approach, and only by trying it out can we see what works.

Standards bodies

One of the ways that the variations in code are kept in check is by developing open standards. For the internet, for example, standards are developed by the World Wide Web Consortium (W3C). These define the meaning and intended use of the building blocks of the web, providing a frame of reference for developers. However, such standards are not mandatory. In fact, software frequently, perhaps normally, doesn’t fully meet all standards. Rather than taking a punitive and controlling approach, standards bodies aim to exert a positive influence by making development and interoperability easier for everyone.

This is similar to the role of the Sangha as traditional custodian of the texts. Again in contrast with the Brahmanical approach—which recommended pouring molten copper in the ears of a low-caste person who listens to the Vedas—the Sangha acted as a centralizing influence, holding Councils to standardize the texts. But nowhere, historically, did the Sangha assert unilateral control over the texts or punish other uses.

If we look at how the different canons are organized, we can see that they all use similar structural principles: Agamas or Nikayas of the Digha, Majjhima, and so on; division into samyuttas, vaggas, and so on. There are also similar narrative conventions, introducing each text with a location, and so on. And yet the specifics of each canon are quite different. Clearly, if the job of the Councils was to create a single letter-perfect edition, they failed. On the other hand, the canons that have emerged do share a similar nature to, say, different documents produced to the same standards. While not identical, their common features are easily discerned, and the differences are rarely critical.

Data and community

There is, I think, a more subtle way in which computing and the Dhamma relate. Both, in their more technical aspects, tend towards a certain reductionism, even an inhumanity. In its extreme forms, the Dhamma reduces humanity to a buzz of conditioned energies, devoid of meaning or significance. Similarly, technology has a dark side: it encourages a culture of not caring, of hate and abuse, such as the users of the internet are all too familiar with. Once people are reduced to images on a screen, or even binary bits, it becomes easy to treat them in ways that we’d never treat a human being beside us.

The computing world tries—with limited success—to overcome this through connection and community. There is a recognition among some programmers that their work is inherently communal, that all code relies on other code. In any software project, managing the team is as significant as defining the software.

Similarly, the Buddha laid great emphasis on community, love, and mutual support. He famously said that good friendship is the whole of the spiritual path. Countless teachings are aimed at explaining, not just how the mind works, but how to put that understanding to good use in developing close and supportive community. While it’s important to understand how nuts and bolts work, it’s even more important that we don’t treat each other like collections of nuts and bolts.


As a long time computer nerd and Buddhist geek this was a fascinating read. I think one of the Sujato powers is the ability to take an immense breadth of knowledge not immediately related to dhamma — and to speak dhamma through it.

Having heard of your success in completing the initial translation of the 4 nikaya project, I hope you enjoy the vipāka of this tremendous kamma. Thank you for all your contributions. May you be happy and well, bhante.


Haha. That was delightful read. Indeed, there are some funny parallels.

This kind of reminded me of the following:

Process and Emptiness: A Comparison of Whitehead’s Process Philosophy and Mahayana Buddhist Philosophy

For those unaware, Whitehead and Bertrand Russell worked on something intimately related to Gödel’s work.

Now, considering that title above, it might be worth mentioning that computer science is, essentially, a science and study of processes. With that posited, another topic of Bhante’s essay could be that “we” are not fixed entities, but processes (“a running software”, or more precisely, a self-modifying program).

And the Internet can serve as a great illustration. Like us, it has been “running” for decades, and even after all of its connected devices have been replaced over time (like, say, the cells and atoms of our body), we still think of it as “the same internet”, though perhaps somewhat different from what it was in the past (interestingly, it has been said that it is the closest thing to a living organism humans have ever constructed).

But back to ancient influences, particularly of India on science, Dr. C.K. Raju’s work might be interesting to look at. He is certainly a controversial character, though (personally, after reading a few things by him, I don’t see the vulgarity that some accuse him of, though I see valid reasons for controversy). For one of his works related to Buddhism, see:

Ātman, Quasi-Recurrence, and Paticca Samuppâda


That’s all I needed for my homework… :grin:


Wow, this is great! So cool when an old thread pops up in the suggestions. They’re like easter eggs.

It reminded me that infinity as a mathematical concept has been consistently part of South Asian mathematics since the Buddha’s time and only gradually became understood in the west.

This is actually a very well reasoned and interesting paper. Thank you for sharing! :slightly_smiling_face:


I thought about the parallels between the human mind and computers quite a bit a few years ago when I was experimenting with writing scripts that (attempted to!) parse Chinese and provide at least a working draft to start with for a translator.

The thing that’s missing in computers is meaning. At the end of the day, they are calculators. We’ve figured out ways to convert numbers to various things that are meaningful to humans, like imagery and language, but computers are only calculating numbers behind the scenes. So, when it comes to language, they have no idea what anything means beyond the numbers the software uses to represent characters.

Computers and the human mind are similar in that memory acts like a fuzzy database. The data degrades quickly, so it has to be maintained with frequent repetitions, which is why we see repetition in oral traditions. The pathways are continuously being refreshed by repeating the words, and consistency between different people’s memory is maintained by communal recitation.

Keywords access larger chunks of data in the human mind the way they do in a computer database. That’s basically the way stock passages in Buddhist texts work. A keyword is hit and it triggers the recall of a memorized passage, which can be quite large (like an entire sutra recalled from its title in an uddana verse). It’s like verbal cut & paste.

Human memory, though, is more complex than a computer database. Language consists of more than just sounds and words. There’s a layer of meaning that underlies that data, and that’s what’s missing in computers. They have data but not meaning.

People often seem to fall into computer-like thinking, obsessing over data (pronunciations and words) rather than meaning. Sometimes, I see people who insist that data and meaning can only have a one-to-one relationship when they clearly don’t. It’s as though they’d rather the meaning layer of human language would disappear, or at least simplify to the point that it’s obviated away. Then they could treat literary or religious texts like simple binary data.

Or, maybe people sometimes have a difficult time realizing that language exists on both personal and communal levels. People can understand words how they like and often do. It’s only when they communicate with other people that the data and meanings have to line up to some degree. But not 100%. Just enough to communicate.

That I think is what makes language complex and resistant to computerization. To computers, everything is a number, and numbers are absolute values. In human language, words and meanings have fluid relationships that are relative and change over time and from person to person.


It might not always be missing, tho. Can you say a bit about what you mean by “meaning”?

Sure. Meaning is hard to think about outside of language, isn’t it? I mean, we can relate words to each other, but there’s something underneath them that isn’t verbal. Otherwise, how do animals without language think? They clearly do. So, it seems to me, language is an added feature used to communicate, but basic meaning is pre-lingual. Those basic meanings and words then get combined to make more complex and abstract meanings.

So, an example of basic meanings would be the experience of colors. We attach those meanings to certain words to refer to them, but different languages and people disagree about where exactly the lines are drawn on the rainbow. Some languages lump blue and green together, and people actually see shades a little differently. So, people sometimes disagree about where blue becomes green.

There are more complex meanings that are experience-based, too. Take as an example the words “crash up derby” (or “demolition derby”). It means a game in America (maybe elsewhere?) in which people crash old cars into each other in a playing area until only one car can move. The playing area is fairly small and made really muddy so the players can’t reach very high speeds, and everyone drives in reverse. The cars are made as safe as possible, with glass removed and “armor” added. It was fun to watch crash up derbies at the local fair every summer when I was young.

Each round would involve a half dozen cars. At first, there wouldn’t be much room to maneuver, so when a car managed to hit the front end of another car, everyone would cheer. The radiator was the main target; a solid hit would mean that car would soon be out when the engine overheated. When only a couple cars were left, there’d be enough room and the mud would be flattened enough that a car could pick up more speed, meaning the round would be over sooner rather than later.

So, that’s what “crash up derby” basically means to me. That meaning is probably somewhat different to me (a positive set of fun experiences watching from the audience) than to another person. Maybe the rules are different in different places, or a person just doesn’t like watching people (usually men) ram cars into each other.

We can find an objective, shared meaning in a dictionary using words.

Merriam-Webster says it means:

  1. a contest in which skilled drivers ram old cars into one another until only one car remains running
  2. something that resembles a demolition derby in destructiveness

Pretty stripped down meaning, but it works for those who don’t have the experience. It also points out the way meanings are borrowed and applied to similar situations metaphorically to create new meanings. I might say, “That square dance was a demolition derby!”

Now, the words “demolition derby” mean people crashing into each other instead of cars. Maybe they all fell down until only one was left standing. So, context matters, too, when we make meanings from words. Dictionaries can’t anticipate all the meanings any given word can have in every sentence because of contextual meaning. They just give us the basics. The reality is that we don’t know precisely what words mean until we see them in a sentence. Whole sentences combine to make a single meaning from all the words. It’s pretty complex when you really study it like you do when translating between languages.

Then, sometimes, words refer to other words abstractly to simplify things. “Five aggregates” means five other words and their meanings collectively, as an example.

When I was attempting to write a script to translate Chinese, I ended up attempting to write a bunch of rules for how to read characters depending on the characters around them. Written Chinese is pretty challenging to translate with software because it’s so word-order dependent. You can’t tell basic things like parts of speech without looking at its place in the whole sentence and the words it’s used with. I gave up pretty quickly. I decided my time was better spent translating than trying to build a model of what I do when I read classical Chinese.


I’m giving all due consideration to your post before responding in full, but I’m giving into the urge to respond impulsively to one facet…

You talk about crash-up/demolition derbies (I’ve been to my share!) as well as the 5 aggregates, and I’m reminded of Thanissaro Bhikkhu’s (Ajahn Geoff’s) more vernacular translation of the skandhas as “heaps.” For me, this connotes just a pile of stuff, only loosely differentiable from the stuff around it.

What’s an old beat-up car but a “heap”? :joy:


Yes, you specifically allude to what’s sometimes known as the “grue” hypothesis or paradox. The curious may wish to investigate this on their own, but personally I’m delighted you brought it up, since this is related to the Sapir-Whorf hypothesis – a concept I once brought up in a group discussion with Bhante @sujato, who without hesitation challenged the validity of the idea. It made an impression! I thought I was getting away with being clever, but so much for that!

Anyway, after furiously cycling through every related philosophical concept I’ve ever been exposed to and quickly refreshing myself on those that seemed most relevant, what I come back to is that meaning, and consciousness itself, is an emergent phenomenon. Where this leaves me, philosophically, for the sake of this discussion, is that while machines may not yet behold or convey meaning, it does not follow that they may never do so.


I agree. And maybe this will be a matter of practical importance at some point. I just think that what computers are doing now has nothing to do with such questions; we are reading patterns in the clouds.

There is no road from here to there.


Obviously, the Buddha was also the greatest Debugger of all time!


Yeah, I would say that the effects of language on how people think would fall into what they call “edge cases” in IT. Edge cases matter more in computer systems than human ones because computers can be very rigid and fragile in how they handle unanticipated situations. Humans generally shrug and move on.

Since we think with words, our logic uses those words as building blocks, but it’s not so rigid as to limit what we can think. As I showed above, the meanings attached to words are loose and flexible in practice, so it’s not hard to get around basic meanings to create new ones. There’s also the fact that languages don’t exist completely independently of each other. Civilizations have been sharing their words and ideas with each other since ancient times. And that’s accelerating with the global community that arose in the last century.

I think researchers are working on trying to build software systems that model what we think the brain does. Maybe they’ll figure something out that gets close to consciousness. Computers themselves, though, have been getting faster and faster by making the same basic calculator circuitry smaller and adding more and more parallel processing with more and more bandwidth. Unless consciousness is the result of sheer speed and data, they aren’t getting any closer to it that way. Maybe quantum computers will add something new? As an avid sci-fi reader, I have to allow for new things to change what’s possible as a matter of principle, but it’s like faster than light travel. Maybe it’ll be possible someday, but nobody knows how at this point.


Michael Radich points out, in this paper on the idea of immortal Buddhas, this passage from Pure Land Sutras T361/2 which might be the very first definition of computational intractability:

The Buddha said to Ananda, “The length of the life span of the Buddha of Measureless Life cannot be calculated. Do you want to know to what extent? If, for example, all the numberless living beings in the world systems in ten regions of the universe were to obtain a human body and were all caused to be in full possession of the state of a disciple or solitary Buddha, and if they then all gathered and assembled in one place and in deep meditation single-mindedly used the power of their knowledge to determine the length of the life span of this Buddha, and, during a hundred, a thousand, or ten thousand cosmic ages, counted, all of them together, they would not succeed in knowing the limits of his life span, even if they counted for many cosmic ages.”


And when I read SN12.23, the thought arises…“well so THAT is how a neural net works…”


??? Can you explain it for the sake of us linear old computers? I don’t see what quantum leap you did to solve this NP-Hard riddle :joy:


My iPhone has a neural net that recognizes my face. In other words, my iPhone has a sight sense field that generates a contact. That neural net has layers that each recognize such things as vertical or horizontal lines. Each layer therefore embodies name and form.

Indeed, neural nets require a succession of layers. Name and form are vital conditions for consciousness. Consciousness is a vital condition for name and form. Layers are vital conditions for layers. Each layer therefore embodies rudimentary consciousness. A neural net that recognizes my face is rudimentarily conscious of my face.

Neural nets have to be trained–they have to be conditioned. I have to show my face to the iPhone. Training a neural net requires feeling/assessment and intention and choices and grasping. A neural net grasps by means of its “objective function”. An objective function reinforces positive matches (i.e. that’s Karl) and calculates gain/loss with respect to intention. An objective function therefore feels and grasps.

Twins can generally open each other’s iPhones. I would call that suffering based on delusion that arose out of grasping. :laughing:

DN1:3.74.3: “Well, then, Ānanda, you may remember this exposition of the teaching as ‘The Net of Meaning’, or else ‘The Net of the Teaching’, or else ‘The Prime Net’, or else ‘The Net of Views’, or else ‘The Supreme Victory in Battle’.”