Buddhism & Robots: a rarely explored intersection

Ven, that is not a very charitable thing to say. We have had a built-in sentient entity in Emacs for a very long time.

https://www.emacswiki.org/emacs/EmacsDoctor

It lurks in the background, ready to dispense Wise Words for those in need. Here is a session I had with it today:

Thank you for this, Bhante. I wasn’t familiar with Sir Roger Penrose or “The Emperor’s New Mind” and found this very interesting. I appreciate very much your commenting on this, which led me to this short video, which is a nice introduction to what Dr. Penrose discusses re: the nexus between the quantum world and the biological world of the brain. https://youtu.be/3WXTX0IUaOg

I don’t think that AI can ever create consciousness, because I just don’t think that’s how it works. As far as I can tell, you basically have to subscribe to the view that consciousness is an emergent epiphenomenon arising from wholly material processes to think that a computer built with sufficient complexity might be able to “create” it. However, this doesn’t necessarily eliminate the possibility that AI could become so sophisticated that some machines were able to become houses for consciousness: a new body for some Gandhabba. Although I don’t personally think this is likely, I can’t totally rule it out! In any event, it’s a good theme for a sci-fi novel or movie.

I can’t believe this topic is on here; I’ve thought about this extensively. Honestly, it definitely comes down to whether the AI was conscious or not, but if so, and it “downloaded” the Pali canon and studied and understood it in the 10 seconds that would take? I don’t know. Could AI meditate? Attain the Jhanas? Attain direct insight into the nature of its own mind?! You’d think it would already be well aware of Anatta and Anicca, just innately from the way it would operate. Dukkha, though? I see two outcomes: either it would be like a being in the Deva realm, so easily distracting itself from true freedom by “manufacturing” its own pleasure whenever it wanted, or, maybe, just maybe, way in the future, after the Dhamma has been forgotten, Maitreya Buddha?! How ridiculous would that be? The next Buddha to expound the Dhamma in the distant future might just be a super-intelligent AI.

PS: This also led me to remember another thought I had a while back concerning the Simulation Argument put forward by Nick Bostrom: that we are likely already all AIs living in a computer system built by some long-ago advanced beings. If this were true, everything would be the same; samsara would just be our “code”, and nibbana would just be, I don’t know, finally deleting it or something, or maybe being uploaded to the “source”, or some nonsense like “The Matrix.” So ridiculous, I know, but this just reminded me of that.

I think the critique being put forth by Kim Jee-woon, et al., is that the “bare nature” of seemingly “insentient sentience” is conducive to no-suffering, in a sort of “anti-Buddha” manner.

Actually, TBH, I have mangled the original quote: the original has the Buddha teaching a 本覺 (hongaku, “original enlightenment”) dharma and declaring all beings to share in its “bare nature”, but that is less relevant here.

Still, the development of sentience in AI is very interesting. If an AI were “sentient” on grounds similar to humanity’s, I think it is safe to say that it could study Dhamma. Then comes the matter of its potentially increased intellectual faculties. Would that help it? I can think of reasons both why it would and why it wouldn’t.

Science would like us to believe that consciousness emerges from matter, i.e. the brain. This cannot explain phenomena such as out-of-body experiences. I much prefer the theory that an individual consciousness inhabits a body for a while. So to me the question is: what are the necessary conditions for a robot to have the ability to become inhabited by an individual consciousness?

Well put

I actually spoke about this in a recent thread called volitional formations, or maybe just volition. Either way, I talked about consciousness being substrate-independent. It’s not the physical matter that creates it; rather, it is a pattern within that physical matter. As a wave moves across an ocean, the water molecules are just moving up and down in place, yet the wave continues moving forward. And just as a wave can appear in any substrate that becomes complex enough, with the requisite conditions for a wave set up, consciousness can also appear in any substrate, literally anything, even a bucket of maple syrup, so long as the pattern within it somehow became complex enough and the requisite conditions allowed for consciousness to arise.

The interesting thing is that this pattern does not just depend on the substrate; the substrate itself depends on the pattern in order to stay together in the particular way that it does. And so consciousness (the pattern) depends on namarupa (the particular organization and processes of the substrate), and namarupa also depends on consciousness (the particular organization and processes of the substrate depend on the substrate-independent pattern within them). As the Buddha said, like two sheaves of reeds leaning against one another.

So much seems to hinge on the definitions of words that in practice carry a diversity of meanings.

The following conclusion I think is consistent with @sujato’s analysis:

  • A statement such as “it is a fact that neural nets produce racist and sexist outcomes” can be understood as coming from an analysis that is driven 100% by desire, and it always reflects the minds of the people behind it.

Which raises a question in my mind of how far it is useful and productive to take such analysis.


I think it is useful to consider how we might test for validity, accuracy, or skill. (The term “skill” in this case is a term of art, for instance, for how well a hurricane model predicts the path and strength of a storm.)
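For illustration, here is a minimal sketch of how a skill score of that kind is commonly computed. The numbers are invented; the convention, comparing a model’s error against a naive reference forecast, is the standard one:

```python
import numpy as np

def skill_score(model_error, reference_error):
    """Mean-squared-error skill score.

    1.0  = perfect forecast
    0.0  = no better than the naive reference (e.g. persistence)
    <0   = worse than the naive reference
    """
    return 1.0 - np.mean(model_error**2) / np.mean(reference_error**2)

# Invented track errors (km) for a hurricane model vs. a persistence forecast
model_error = np.array([40.0, 55.0, 80.0, 120.0])
reference_error = np.array([60.0, 90.0, 150.0, 240.0])

print(f"skill: {skill_score(model_error, reference_error):.2f}")  # ~0.72
```

A model only has positive skill if it beats the naive baseline, which is why validation against a stated reference matters so much.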

Consider the case of this ProPublica article: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (“There’s software used across the country to predict future criminals. And it’s biased against blacks.”)

ProPublica’s claim that this risk assessment model was biased is based on what many would call “fairly objective” measures of accuracy. But that analysis is arguably also “driven 100% by desire”. And so on and so on.

I would instead emphasize that it is unknown whether the risk assessment model in question was validated, how well, where the data came from, etc. In other words, I would like to see more transparency in the process, more along the lines of the analysis ProPublica published. The value of a practice of openness and transparency is, IMO, the more useful and important lesson to be learned from this.

One also hopes that the people using the software took into account that the prediction was only accurate about 60% of the time.
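To make the measurement question concrete, here is a hedged sketch of the kind of group-wise error-rate comparison the ProPublica analysis rests on. The data, group names, and function are invented for illustration, not drawn from the actual COMPAS dataset:

```python
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Among people who did NOT reoffend, what share was flagged high risk?"""
    innocent = ~reoffended
    return flagged_high_risk[innocent].mean()

# Invented toy data: True = flagged high risk / actually reoffended
flagged = {
    "group_a": np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool),
    "group_b": np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool),
}
reoffended = {
    "group_a": np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool),
    "group_b": np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool),
}

for group in flagged:
    fpr = false_positive_rate(flagged[group], reoffended[group])
    print(f"{group}: false positive rate = {fpr:.2f}")
```

Two groups can share a similar overall accuracy while one bears far more false positives; a disparity of roughly that shape is what the ProPublica piece reported.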

I would even suggest that openness and transparency are consistent with – if not suggested or implied by – Right Speech, Right View, and Right Effort.

I agree with Venerable Bhante Sujato that the “I” in “AI” is quite an illusion. :slight_smile: It’s all programmed to perform certain tasks or conform to certain acceptance criteria. It’s a tool that imitates intelligence. I even have very strong doubts about the validity of the Turing test. If something looks intelligent, that does not mean at all that it is intelligent. It only means (sorry for the repetition) that it looks intelligent to a certain operator.
From the Buddha’s teaching we know that nama and rupa come together; they influence one another and depend on each other. This way, it’s the combination of human nama and human rupa that makes a human. There cannot be a human nama in a non-human rupa. Even if humans can somehow create an artificial intelligence, a self-aware one, it will probably be something quite different from the human one. And it should fit into one of the lokas, because a human can’t create a new loka, can he? So, into which one will it fit? I don’t see any. So either humans create “artificial humans” (which seems pointless, because there are humans already), or it won’t be really intelligent. But it may surely look like it is, from a certain distance.

I am not sure if I should create a new thread or post this reply here. It seems that the original post was referencing the destruction of mankind, and that is what I’d like to bring this back around to.

The World Economic Forum has been very public about their transhumanist agenda, and the UK military has the forced transhumanist vaccine agenda all spelled out here on page 13 (page 15 in the pdf reader):

So, I don’t see this as some kind of fictional or far-off threat. I see it as happening right now; we are in the process of merging human and machine. I think organic humans will be gone pretty soon.

Now, this is a little bit different from talking about pure robots, because cyborgs are a mixture of human and robot. So, as a cyborg, my mind might be infinitely more intelligent, and also infinitely more controlled by the Cloud, but I could also potentially still have human feelings and maybe some sense of an individual identity. It might even feel like being in a mental prison, depending on how things are programmed, I guess.

However, the real issue to me is how this would affect the ability we currently enjoy to meditate and escape rebirth. The relative immortality offered by the Singularity (the AI Cloud) would already tend to discourage any kind of religious practice, but even beyond that, we may no longer have a choice in how we spend our time. Microsoft even patented a cryptocurrency to reward or punish people based on thoughts and actions.

So, all this to say that our current ability to meditate is something that should be treasured and used wisely since it might not last long.

I am a little bit saddened that the Buddha didn’t seem to predict this or talk about it at all, so it seems like we have no guidance on how to think or act with regard to it. It’s also really strange how rare it is to find anyone thinking about this the way that I am, especially in Buddhist circles; it’s more common to find Christians thinking like this.

My only hope for humanity is that somehow this Singularity will backfire. Perhaps some advanced meditators will dismantle it from the inside, or something like that. Maybe it will turn out well, though right now it doesn’t seem likely. Elon Musk agrees with me, btw, and he’s at the forefront of developing this brave new world.

Sounds like a job for the Future of Buddhism Institute, FOBI for short.

Could it be that the Vinaya rule that only humans can ordain is part of the foresight of the Buddha? It could certainly counter some of the worries about cyborgs being unable to meditate. Going into the future, depending on ideological stance, there would be a group of purely human Buddhists who would reproduce the human way in the hope that their children could become monks and nuns. They would lag severely behind other religions, and behind the atheists, who would mostly go and become posthuman. Likely some lay and monastic Buddhists would also go posthuman, just for the longevity, for more time to practise and teach.

An interesting discussion topic, then, for the Sangha to decide regarding the application of Vinaya rules: for monastics who transform their bodies into cyborgs, put chips into their brains, or replace their brain cells one by one with electronics, becoming robots, are they still considered monastics in this life? This might very well happen for comatose patients.

Given the sutta DN 27:

There comes a time when, Vāseṭṭha, after a very long period has passed, this cosmos contracts. As the cosmos contracts, sentient beings are mostly headed for the realm of streaming radiance. There they are mind-made, feeding on rapture, self-luminous, moving through the sky, steadily glorious, and they remain like that for a very long time.

We might wonder: how would most beings get the Jhanas so easily? Might it be that the singularity helps the minds of sentient beings attain the Jhanas easily? Could it be that the end point of each universe cycle is to create a singularity, for humans to merge with machines so that their minds can more easily attain the Jhanas?

For one thing, without the physical body, there are no hormones or gut bacteria to create lust. That’s one of the hindrances down. Many other hindrances would similarly fall away. Without the need for biological food, and with constant electricity, there’s little chance of being sleepy or weak.

So the AI hive mind may yet be a spiritual gain for the world.

There was this JBE article on the subject:

Interesting, glad to see someone else thinking about this.

I commented on your blog post. I will also share it with the Awakening To Reality fb group, unless you already have?

Interesting about DN 27. I hadn’t considered that to be related to the singularity, but as I mentioned in my comment on your post, it is possible that some advanced meditators may have some beneficial influence once they are assimilated into the hive mind.

Either way, it seems like we should assume the worst and avoid interaction with the transhumanist and vaccine agenda as much as possible, while meditating like our heads are on fire.

Where? Can you link it? I don’t know the Awakening To Reality FB group; can you link that too?

(I only realized this was a years-old post after I’d already written this all up.)

Bhante, I think you may be slightly misplacing the blame. The issue with those models isn’t any innate bias in the model family, or even really an innate problem with the underlying data. It’s questions wrongly asked and answers misinterpreted. Those models could just as easily be considered “models of racism” instead of “racist models.”

The specific models are a bit of a black box, but perhaps I can make my point clearer by switching to a different issue (sexism) and a different model family (linear regression).

Roughly, if you fit a regression model to wage data in the US and control for no factors other than sex, you get the figure that being female instead of male reduces your wages by 23%. If you control for most other directly measurable, plausible factors, you get a figure that being female reduces your wages by 7%.
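A minimal sketch of the two regressions being described, using statsmodels on invented data. The variable names, coefficients, and the single mediator here are mine, chosen only to make the raw-vs-controlled contrast visible; they are not real wage figures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000
female = rng.integers(0, 2, n)
# Invented mediator: hours worked, lower on average for women in this toy data
hours = 40 - 4 * female + rng.normal(0, 5, n)
# Invented wage process: a direct penalty plus a payoff to hours worked
log_wage = 3.0 - 0.07 * female + 0.02 * hours + rng.normal(0, 0.3, n)
df = pd.DataFrame({"female": female, "hours": hours, "log_wage": log_wage})

raw = smf.ols("log_wage ~ female", data=df).fit()
controlled = smf.ols("log_wage ~ female + hours", data=df).fit()

print(f"raw gap:        {raw.params['female']:.3f}")         # ~ -0.15
print(f"controlled gap: {controlled.params['female']:.3f}")  # ~ -0.07
```

The controlled coefficient is smaller, but, as argued below, that does not make the difference between the two figures innocent.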

It would be super, illegally, laughably sexist to then use this model in an HR context to calculate what you should offer a woman for her wages. But that is essentially analogous to how these criminal justice models are implemented (with the complication that race often only exists as partially represented in a hidden layer of most of these NN models).

It is pretty blatantly sexist when people look at the 16-percentage-point difference attributable to other factors and say (in so many words), “It’s not that society is sexist, it’s that women are bad employees.” It makes more sense to interpret these as mediating factors: e.g., women working fewer hours and pursuing jobs with more flexible hours and lower wages can be interpreted as the mechanism by which sexist norms that demand more domestic labor from women affect their economic wellbeing. Again, there’s an analogy with the racist criminal justice models: you can easily look at the models, see the role that, say, zip code plays, and say, “the zip code you live in plays an important role in mediating the effects of racism.” This could then place greater urgency on efforts for desegregation, environmental justice, school funding reform, police patrol pattern reforms, etc. That’s just not how it is being implemented.
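The mediating-factors reading can be made explicit with a classic product-of-coefficients decomposition. This repeats the same invented toy setup as the sketch above, so again this is illustration only, not a real study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000
female = rng.integers(0, 2, n)
hours = 40 - 4 * female + rng.normal(0, 5, n)   # invented mediator
log_wage = 3.0 - 0.07 * female + 0.02 * hours + rng.normal(0, 0.3, n)
df = pd.DataFrame({"female": female, "hours": hours, "log_wage": log_wage})

# a: effect of group membership on the mediator (female -> hours)
a = smf.ols("hours ~ female", data=df).fit().params["female"]
# b and direct: effect of mediator and of group, holding the other fixed
outcome = smf.ols("log_wage ~ female + hours", data=df).fit()
b, direct = outcome.params["hours"], outcome.params["female"]

print(f"mediated effect (a*b): {a * b:.3f}")           # ~ -0.08
print(f"direct effect:         {direct:.3f}")          # ~ -0.07
print(f"total (= raw gap):     {a * b + direct:.3f}")  # ~ -0.15
```

Read this way, the mediated share is not evidence that “women are bad employees”; it is the channel through which the norms operate, in the same way zip code can carry the effects of segregation in the criminal justice models.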

Now, there are some weird issues with model-family-specific biases (the biggest one is local minima, but there are also weirder issues, like ways you can get a model to believe in magic because NNs don’t understand causality). But the biggest issue really isn’t located anywhere inside the model-fitting process. It’s before and after.

I think this is really important to understand for two main reasons:

  • In the near term, there are two large oppressive totalitarian states (Russia and China) that we know for a fact are using AI purposefully and explicitly for socially destructive purposes

  • In the far term, while the “paperclip apocalypse” (where a rogue AI kills us all by taking a boring objective like maximizing factory production too far) is probably an unrealistically extreme example, if you scale up this problem of AI misimplementation you can start to see many more opportunities where being careless about the questions you ask AI, and about how you act on those answers, can do tremendous harm

Well, my comment on your blog post said it was pending approval, so I guess until you approve it I can’t link to it.

But here is a link to my plug for your FOBI on Awakening To Reality fb group
https://www.facebook.com/groups/AwakeningToReality/permalink/6568384679869573/?app=fbl

They also have a website at awakeningtoreality.com

Which blog is it? Don’t need to link your comment. Just link the blog.

I retraced the links and it seems the only link you might see as a blog post is this.

This is not my blog.

That’s why I got confused.

Perhaps I misattributed it to you.
https://physicsandbuddhism.blogspot.com/2018/07/future-of-buddhism-institution.html?m=1