Buddhism & Robots: a rarely explored intersection

So much seems to hinge on the definitions of words that in practice carry a diversity of meanings.

The following conclusion, I think, is consistent with @sugato’s analysis:

  • A statement such as: "it is a fact that neural nets produce racist and sexist outcomes"
    can be understood as coming from an analysis that is driven 100% by desire, and it always reflects the minds of the people behind it.

Which raises a question in my mind: how far is it useful and productive to take such an analysis?


I think it is useful to consider how we might test for validity, accuracy, or skill. (The term “skill” here is a term of art: a measure of, for instance, how well a hurricane model predicts the path and strength of a storm.)
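
To make “skill” concrete: it is usually defined relative to a reference forecast. Below is a minimal sketch of one common MSE-based formulation; the error numbers are made up purely for illustration.

```python
import numpy as np

def skill_score(model_errors, reference_errors):
    """MSE-based skill: 1 is a perfect forecast, 0 is no better than
    the reference forecast, negative is worse than the reference."""
    return 1.0 - np.mean(np.square(model_errors)) / np.mean(np.square(reference_errors))

# Hypothetical hurricane-track errors (km) for a model, versus a
# naive persistence forecast used as the reference.
model_err = np.array([45.0, 60.0, 30.0, 80.0])
persistence_err = np.array([90.0, 120.0, 70.0, 150.0])
print(skill_score(model_err, persistence_err))  # ~0.74: substantial skill
```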

Consider the case of this article: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

“There’s software used across the country to predict future criminals. And it’s biased against blacks.”

ProPublica’s claim that this risk-assessment model was biased is based on what many would call “fairly objective” measures of accuracy. But that analysis is arguably also “driven 100% by desire”. And so on, and so on.
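
For what it’s worth, the “objective” measure at the heart of that analysis was a comparison of error rates across groups: ProPublica reported that defendants who did not reoffend were flagged high-risk roughly twice as often when they were Black (about 45% vs. 23%). Here is a minimal sketch of that kind of check, on synthetic data with made-up group labels and rates:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Among people who did NOT reoffend, the share flagged high-risk."""
    non_reoffenders = (y_true == 0)
    return np.mean(y_pred[non_reoffenders] == 1)

# Synthetic data: y_true = actually reoffended, y_pred = flagged high-risk.
rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["a", "b"], size=n)
y_true = rng.integers(0, 2, size=n)
# Simulate a score that flags group "a" more readily, independent of outcome.
flag_prob = np.where(group == "a", 0.45, 0.23)
y_pred = (rng.random(n) < flag_prob).astype(int)

for g in ("a", "b"):
    mask = group == g
    print(g, round(false_positive_rate(y_true[mask], y_pred[mask]), 2))
# Prints roughly 0.45 for "a" and 0.23 for "b": a large error-rate disparity.
```

Note that the model itself is untouched here; the disparity only shows up when you ask this particular question of its outputs.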

I would instead emphasize that it is unknown whether the risk-assessment model in question was validated, how well, where the data came from, and so on. In other words, I would like to see more analysis like the one ProPublica published, and more transparency in the process. The value of a practice of openness and transparency is, IMO, the more useful and important lesson to be learned from this.

One also hopes that the people using the software took into account that the prediction was only accurate about 60% of the time.

I would even suggest that openness and transparency are consistent with – if not suggested or implied by – Right Speech, Right View, and Right Effort.

I agree with venerable Bhante Sujato that the “I” in “AI” is quite an illusion. :slight_smile: It’s all programmed to perform certain tasks or conform to certain acceptance criteria. It’s a tool that imitates intelligence. I even have very strong doubts about the validity of the Turing test. If something looks intelligent, that does not mean at all that it is intelligent. It only means (sorry for the repetition) that it looks intelligent to a certain operator.
From the Buddha’s teaching we know that nama and rupa come together; they influence one another and depend on each other. It is the combination of human nama and human rupa that makes a human. There cannot be a human nama in a non-human rupa. Even if humans can somehow create an artificial, self-aware intelligence, it will probably be something quite different from the human one. And it should fit into one of the lokas, because a human can’t create a new loka, can he? So which one would it fit into? I don’t see any. So either humans create “artificial humans” (which seems pointless, because there are humans already), or it won’t be really intelligent. But it may surely look like it, from a certain distance.

I am not sure if I should create a new thread or post this reply here. It seems that the original post was referencing the destruction of mankind, and that is what I’d like to bring this back around to.

The World Economic Forum has been very public about their transhumanist agenda, and the UK military has the forced transhumanist vaccine agenda all spelled out here on page 13 (page 15 in the pdf reader):

So, I don’t see this as some kind of fictional or far-off threat. I see it as happening right now; we are in the process of merging human and machine. I think organic humans will be gone pretty soon.

Now, this is a little bit different from talking about pure robots, because cyborgs are a mixture of human and robot. So, as a cyborg, my mind might be infinitely more intelligent, and also infinitely more controlled by the Cloud, but I could also potentially still have human feelings and maybe still some sense of an individual identity. It might even feel like being in a mental prison, depending on how things are programmed, I guess.

However, the real issue to me is how this would affect the ability we currently enjoy to meditate and escape rebirth. The relative immortality offered by the Singularity (the AI Cloud) would already tend to discourage any kind of religious practice, but even besides that, we may no longer have any choice in how we spend our time. Microsoft even patented a cryptocurrency to reward or punish people based on thoughts and actions.

So, all this to say that our current ability to meditate is something that should be treasured and used wisely since it might not last long.

I am a little bit saddened that the Buddha didn’t seem to predict this or talk about it at all, so it seems we have no guidance on how to think or act with regard to it. It’s also really strange how rare it is to find anyone thinking about this the way I do, especially in Buddhist circles; it’s more likely to find Christians thinking like this.

My only hope for humanity is that somehow this Singularity will backfire. Perhaps some advanced meditators will dismantle it from the inside, or something like that. Maybe it will turn out well, though right now it doesn’t seem likely. Elon Musk agrees with me, btw, and he’s at the forefront of developing this brave new world.

Sounds like a job for the Future of Buddhism Institute, “FOBI” for short.

Could it be that the Vinaya rule that only humans can ordain is part of the Buddha’s foresight? It would certainly counter some of the worry about cyborgs not being able to meditate. Going into the future, depending on ideological stance, there would be a group of pure human Buddhists who reproduce the human way in the hope that their children can become monks and nuns. They would lag severely behind, as followers of other religions and atheists would mostly go and become posthuman. Likely some lay and monastic Buddhists would also go posthuman, just for the longevity, to have more time to practise and teach.

An interesting discussion topic, then, for the Sangha when deciding how the Vinaya rules apply: for monastics who transform their bodies into cyborgs, put chips into their brains, or replace their brain cells one by one with electronics, becoming robots, are they still considered monastics in this life? This might very well happen for comatose patients.

Given the sutta DN 27:

There comes a time when, Vāseṭṭha, after a very long period has passed, this cosmos contracts. As the cosmos contracts, sentient beings are mostly headed for the realm of streaming radiance. There they are mind-made, feeding on rapture, self-luminous, moving through the sky, steadily glorious, and they remain like that for a very long time.

We might wonder: how would most beings get Jhanas so easily? Might it be that the singularity helps the minds of sentient beings attain Jhanas easily? Could it be that the end point of each universe cycle is to create a singularity, in which humans merge with machines and their minds can thus attain Jhanas more easily?

For one thing, without the physical body there are no hormones or gut bacteria to create lust; that’s one of the hindrances down. Many other hindrances would similarly fall away. Without the need for biological food, and with constant electricity, there’s little chance of being sleepy or weak.

So the AI hivemind may yet be a good spiritual gain for the world.

There was this JBE article on the subject:

Interesting, glad to see someone else thinking about this.

I commented on your blog post. I will also share it with the Awakening To Reality fb group, unless you already have?

Interesting about DN 27. I hadn’t considered it to be related to the singularity, but as I mentioned in my comment on your post, it is possible that some advanced meditators may have some beneficial influence once they are assimilated into the hive mind.

Either way, it seems like we should assume the worst and avoid interacting with the transhumanist and vaccine agendas as much as possible, while meditating like our heads are on fire.

Where? Can you link it? I don’t know the Awakening To Reality FB group; can you link that too?

(I only realized this was a years-old post after I’d already written all this up.)

Bhante, I think you may be slightly misplacing the blame. The issue with those models isn’t any innate bias in the model family, or even really an innate problem with the underlying data. It’s questions wrongly asked and answers misinterpreted. Those models could just as easily be considered “models of racism” instead of “racist models.”

The specific models are a bit of a black box, but perhaps I can make my point clearer by switching to a different issue (sexism) and a different model family (linear regression).

Roughly: if you fit a regression model to wage data in the US and control for no other factor but sex, you get the figure that being female instead of male reduces your wages by about 23%. If you control for most other directly measurable, plausible factors, you get a figure of about 7%.
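
To illustrate, here is a toy simulation (not real wage data; the coefficients are deliberately chosen so the raw and adjusted gaps land near those 23% and 7% figures) showing how the same dataset yields both numbers depending on what you control for:

```python
import numpy as np
import statsmodels.api as sm

# Simulate log wages where part of the raw gap flows through hours
# worked and occupational sorting, and part of it does not.
rng = np.random.default_rng(42)
n = 5_000
female = rng.integers(0, 2, size=n)
hours = 40 - 4 * female + rng.normal(0, 5, n)        # fewer average hours
occupation = -0.08 * female + rng.normal(0, 0.3, n)  # lower-wage occupations
log_wage = 3.0 - 0.07 * female + 0.02 * hours + occupation + rng.normal(0, 0.2, n)

# Raw gap: control for nothing but sex.
raw = sm.OLS(log_wage, sm.add_constant(female)).fit()
# Adjusted gap: also control for the measurable mediating factors.
X = sm.add_constant(np.column_stack([female, hours, occupation]))
adjusted = sm.OLS(log_wage, X).fit()

print(raw.params[1])       # ~ -0.23: raw coefficient absorbs the mediated paths
print(adjusted.params[1])  # ~ -0.07: the residual "unexplained" gap
```

The model happily produces either coefficient; deciding which one to report, and what to do with it, happens entirely outside the fit.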

It would be super, illegally, laughably sexist to then use this model in an HR context to calculate what you should offer a woman for her wages. But that is essentially analogous to how these criminal-justice models are implemented (with the complication that race often exists only as partially represented in a hidden layer of most of these NN models).

It is pretty extremely sexist when people look at the 16-percentage-point difference attributable to other factors and say (in so many words), “It’s not that society is sexist, it’s that women are bad employees.” It makes more sense to interpret these as mediating factors. For example, women working fewer hours and pursuing jobs with more flexible hours and lower wages can be interpreted as the mechanism by which sexist norms demanding more domestic labor from women affect their economic wellbeing. Again, there’s an analogy with the racist criminal-justice models: you can easily look at the models, see the role that, say, zip code plays, and say, “the zip code you live in plays an important role in mediating the effects of racism.” That could then place greater urgency on efforts for desegregation, environmental justice, school funding reform, police patrol reforms, etc. That’s just not how these models are being implemented.

Now, there are some weird issues with model-family-specific biases (the biggest one is local minima, but there are also weirder issues, like ways you can get a model to believe in magic because NNs don’t understand causality). But the biggest issue really isn’t located anywhere inside the model-fitting process. It’s before and after.
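
Here is a small sketch of the “believing in magic” failure mode, using logistic regression as a stand-in for a neural net; the mechanism of latching onto a non-causal correlate is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000
cause = rng.normal(size=n)
y = (cause + rng.normal(scale=0.5, size=n) > 0).astype(int)
# "Magic" feature: tracks the label almost perfectly in the training
# data, but has no causal connection to it.
magic = y + rng.normal(scale=0.1, size=n)

clf = LogisticRegression(max_iter=1_000).fit(np.column_stack([cause, magic]), y)
print(clf.coef_)  # nearly all the weight piles onto `magic`, not the cause

# At deployment the spurious correlation breaks, and so does the model.
magic_broken = rng.normal(size=n)
print(clf.score(np.column_stack([cause, magic]), y))         # ~0.99 in training
print(clf.score(np.column_stack([cause, magic_broken]), y))  # near chance
```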

I think this is really important to understand for two main reasons:

  • In the near term, there are two large, oppressive, totalitarian states (Russia and China) that we know for a fact are using AI purposefully and explicitly for socially destructive purposes.

  • In the far term, the “paperclip apocalypse” (where a rogue AI kills us all by taking a boring objective like maximizing factory production too far) is probably an unrealistically extreme example. But if you scale up this problem of AI misimplementation, you can see many more opportunities where being careless about the questions you ask an AI, and about how you act on its answers, can do tremendous harm.

Well, my comment on your blog post said it was pending approval, so I guess I can’t link to it until you approve it.

But here is a link to my plug for your FOBI on Awakening To Reality fb group
https://www.facebook.com/groups/AwakeningToReality/permalink/6568384679869573/?app=fbl

They also have a website at awakeningtoreality.com

Which blog is it? Don’t need to link your comment. Just link the blog.

I retraced the links and it seems the only link you might see as a blog post is this.

This is not my blog.

That’s why I got confused.

Perhaps I misattributed it to you.
https://physicsandbuddhism.blogspot.com/2018/07/future-of-buddhism-institution.html?m=1