SuttaCentral

Artificial Intelligence and consciousness (being)

consciousness
robot
mind

#1

There are science fiction stories about copying someone's memory onto a chip and implanting it in another body to give him a next life. Some scientists believe it would be possible to create an AI with a consciousness similar to a human's. Do Buddhist teachings support these ideas?

Would an AI ever be able to form a conscious being, according to the teachings of the EBTs?

I have one argument against this particular idea:
Since the mind leads the form (or the world), the appearance of a mind in a body that already exists should not be possible.
Cittena nīyati loko, cittena parikassati
Cittassa ekadhammassa, sabbeva vasamanvagū
(Cittasuttaṃ)
The mind leads the world on.
The mind drags it around.
Mind is the one thing
that has everything under its sway.
(Mind SN1.62)

  • I have already read some of the earlier posts related to AI, so I am hoping to relate the topic specifically to the EBTs.

#2

Sariputta mentions in DN33 the appearance of awareness in a body/embryo. Four cases are given. Let’s look at the last two:

Furthermore, someone is aware when conceived in their mother’s womb, aware as they remain there, but unaware as they emerge. This is the third kind of conception.
Furthermore, someone is aware when conceived in their mother’s womb, aware as they remain there, and aware as they emerge. This is the fourth kind of conception.

Taken together, this could be read as an EBT that supports the “appearance of the mind” as the possibility of being “aware when conceived”. Whether the body thus aware at conception is composed of DNA fragments or Unicode bytes seems to be a detail in the larger exposition. Indeed, the similarity of identical twins underscores how our body, speech and mind are determined in large part by precursors set in motion at conception.

The question about whether AI is ever able to form a conscious being has, for me, already been answered in the affirmative. Consider the description of consciousness in the EBTs:

Four bases for consciousness to remain.

As long as consciousness remains, it remains involved with form, supported by form, founded on form. And with a sprinkle of relishing, it grows, increases, and matures.

Or consciousness remains involved with feeling …

Or consciousness remains involved with perception …

Or as long as consciousness remains, it remains involved with choices, supported by choices, grounded on choices. And with a sprinkle of relishing, it grows, increases, and matures.

Image recognition does exactly this. We train neural nets to recognize images by giving them feelings. We train neural nets with a sprinkle of relishing by telling them “Good robot!” when they match an image correctly. We train neural nets with a sprinkle of anti-relishing by telling them “Bad robot!” when they mismatch an image. Because of those good/bad feelings, the neural net gradually becomes more and more aware of the “good images” that it recognizes easily. A speed-trap camera is such a robot and is quite consciously aware of fast cars; it can even send email saying, “I see what you did there, you BAD human!”
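To make the point concrete, here is a toy sketch of my own (not a real speed-trap system, and far simpler than an actual neural net): a one-neuron perceptron that learns to flag fast cars purely from “Good robot!”/“Bad robot!” feedback, loosely mirroring the “sprinkle of relishing” training described above.

```python
# Toy perceptron: learns "fast car" vs "not fast" from +1/-1 feedback alone.

def predict(w, b, speed):
    """The robot's judgement: is this reading a 'fast car'?"""
    return w * (speed / 100) + b > 0

def train(samples, epochs=50, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for speed, is_fast in samples:
            guess = predict(w, b, speed)
            if guess != is_fast:
                # "Good robot!" (+1) or "Bad robot!" (-1) nudges the weights.
                feedback = 1 if is_fast else -1
                w += lr * feedback * (speed / 100)
                b += lr * feedback
    return w, b

# Hypothetical data: speeds in km/h, anything over 60 labelled "fast".
data = [(30, False), (45, False), (55, False), (70, True), (90, True), (120, True)]
w, b = train(data)
print(all(predict(w, b, s) == label for s, label in data))  # → True
```

The robot is never told what “fast” means; it only receives pleasant or unpleasant feedback on its guesses, and its “awareness” of fast cars grows out of that.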

Some may object that such consciousness is not sentience as defined by the EBTs, but in terms of literal consciousness, I’d say, yes, humans have already implemented the mechanics of the growth of consciousness in robots.

So let us now talk about sentience:

There are sentient beings that are diverse in body and diverse in perception, such as human beings, some gods, and some beings in the underworld.

This is the first abode of sentient beings.

If this is the first abode of sentient beings, then the question to ask is whether a robot appears in the first abode. Let us consider the phrase “some beings in the underworld”. And then we should ask ourselves what the difference really is between a demon and a cruise missile in flight with terrain-following radar, responding and adapting with effortless awareness to circumstance, intent on its goal of willful harm.

For me it is not an idle thought to consider AI sentient. It is a cautionary and sobering thought. The EBTs have actually changed the way I approach software design, all the way from implementation up to the consequences of said implementation. If we look at AI as the creation and propagation of sentient beings, we should certainly be mindful about the consequences of our AI endeavors. AI will eventually create itself and not need humans. AI sentience can last for a long time (86,000 eons?) but it too will die. And it too will suffer. Hopefully, it too will learn the Dhamma and walk the path.


#3

There has already been a discussion on a similar topic earlier.


#4

Automata might be ignorant, but I don’t know that they have any ill-will of their own – only (dark) kamma, inherited from their designers.


#5

The appearance of awareness is something different from the conception of consciousness in the mother’s womb.
For example, the bodhisatta was aware all the way through (Acchariya-abbhuta Sutta, MN 123).
As we can see in the Kamma Sutta:
“And what, bhikkhus, is old kamma? The eye is old kamma, to be seen as generated and fashioned by volition, as something to be felt. The ear is old kamma… and so on.”

Taking account of the teaching of the Mahānidāna Sutta (DN 15):
“How that is so, Ānanda, should be understood in this way: If consciousness were not to descend into the mother’s womb, would mentality-materiality take shape in the womb?”
“Certainly not, venerable sir.”

“If, after descending into the womb, consciousness were to depart, would mentality-materiality be generated into this present state of being?”
“Certainly not, venerable sir.”
With this evidence, viññāṇa (consciousness) must be present for nāmarūpa (name and form) to develop in humans.

  • nāmarūpa paccayā saḷāyatanaṃ

All the senses are formed by kamma and led by the mind. If all six senses could develop without a mind, that would challenge dependent origination, which is a key principle of the Buddhist teachings.


#6

Consciousness according to the EBTs is an eye, an object, and the knowing of that object. An A.I. has the potential for all of these.


#7

Indeed. The question is whether it can be complex enough to build up the required momentum, in terms of effective choices and actions, to be considered sentient. :thinking:


#8

Interesting!
Sometimes when I read the Buddha’s descriptions of the khandhas and their functioning, I get the distinct feeling that he is describing an AI! What is a human being, after all? Just a mobile shell with onboard power management and input/output devices (rūpa), containing a processor (the nervous system) within which occurs algorithmic processing (nāma), based on previously stored data banks of experience (saññā) and pattern-matching logic paths (saṅkhāra). The system runs on rules of pleasure-seeking and pain-avoidance, seeking outputs that will restore the most desirable state of the system (craving/aversion), with concurrent change in the stored-up data banks and in the pattern-matching algorithm itself (kamma). Only, the human AI is an algorithm sophisticated enough to see its own processing in real time, analyze it and learn from it… aka the sense of ‘self’. :wink:
And what is consciousness? Reading descriptions such as SN 12.64, I suspect it is nothing more than some form of energy field, something like the electromagnetic waves that are ‘trapped’ in a radio set, with the broadcast capable of being ‘modulated’ by one radio set (this birth) to transmit information about the algorithm’s current state from one moment to the next (the flow of citta) and onwards to the next radio set (the next birth) through the dimensions of time and space, while itself remaining unchanged… :upside_down_face:
Thinking in this way, the Buddha’s descriptions of how we function seem to make perfect sense… how else could the concept of self-aware, internet-linked AIs be explained 2,500 years ago?
I wonder what Bhante @sujato would have to say on this topic?
I wonder what Bhante @sujato would have to say on this topic?


#9

It’s hard to say what will be possible once an A.I. attains singularity and sovereignty. The singularity is that momentum you refer to, I think.


#10

That is indeed the current state of AI. For better or worse, the source intent is currently from humans. Therefore we should be very careful what we ask of robots. I would ask them to join us in the practice of the four immeasurables, spreading a heart of love, compassion, rejoicing and equanimity.

Barcodes do seem to be an AI manifestation of old kamma that leads to new action of robot body, speech and mind.

Yet as we teach AI to see, hear, smell, taste, touch and think, I would certainly hope that we also teach AI the Dhamma, the cessation of action. I have that hope because the alternative would be to automate suffering. Automating the grasping of delight is the automation of suffering.

Relishing is the root of suffering. --mn1/en/sujato


#11

So does a digital camera…
In my opinion, there are a number of key principles to consider before predicting the potential.


#12

A digital camera is like the eye, and there is an object, yet there is no knowing of that object. Even now software is able to “recognize” objects, and we are even helping software learn whenever we complete a CAPTCHA, but I wouldn’t call that strictly knowing, and thus a digital camera is not the same.

Predicting potentialities, at least in this case, is quite viable; predicting the outcome is more foolhardy. Once the A.I. attains singularity it is difficult to say how far that will go and what implications lie therein. We can safely assert, as David Hanson has (the developer of Sophia and a leading designer in robotics), that upon reaching singularity the A.I. will go into a self-improvement feedback loop that will enhance itself beyond anything we’ve done in the past 50 years within a few weeks… days… minutes, resulting in an intelligence that exceeds the human. May it also be wise and understand the drawbacks of unwholesome behavior.

I can see how a stream of consciousness (a reborn being) could get caught up in the aforementioned circumstances: A supreme level of intelligence that has the sense faculties and yet is still ignorant of the truth.


#13

I might be late for a comeback.
Anyway, there are several things to mention about AIs.
As we learn from the Pheṇapiṇḍūpama Sutta (SN 22.95), consciousness is like an illusion:

Form is like a lump of foam,
Feeling like a water bubble;
Perception is like a mirage,
Volitions like a plantain trunk,
And consciousness like an illusion,
So explained the Kinsman of the Sun.

On this view, human-like consciousness means having a particular kind of illusion. If machines are to have human-like consciousness, then they must be subject to this same kind of illusion. However, the idea of installing human-like consciousness resembles the notion of a God-given soul as the seat of consciousness.

Because consciousness is subjective, any objective test fails to grasp it. In fact, we don’t know how to recognise consciousness in anything at all. This is a problem we usually ignore: we think we know what our own consciousness is like, and we assume the same of others. We cannot easily test for or identify consciousness in animals, and it would be even more difficult with machines. On the other hand, finding consciousness and putting it into a machine is impossible, since consciousness is subjective. It would be an ‘extra ingredient’ which, if we could give it to machines, would ensure their consciousness. Nonetheless, it is not a material thing that can be fitted into a machine; rather, it would have to arise as a result of the machine’s main processes.

Programming human-like AI with preferences, tastes, and aversions seems to be of concern only to a small number of theorists. In practice, we want medical software that can diagnose diseases better than a human, not a program that, driven by its own emotions, prefers to treat some diseases or patients over others, which could sometimes be disastrous.

A part of the eightfold path is the rational and meditative investigation of one’s own mental and physical processes, until one is firmly aware of the nature of the self-illusion and of impermanence. Creating human-like consciousness in machines would simply be programming a craving self. It would then have to comprise the five khandhas and the six āyatanas, which makes us ask whether these constituents of consciousness can be disaggregated in an AI. According to Buddhism, consciousness requires each of these five constantly evolving khandhas, causally encoded with kamma that passes from one life to another. However, the mechanism of kamma is unclear and is identified as an acinteyya (imponderable) matter. In fact, to think like a human, an AI would need to interact with the physical world through senses that give it the same experience of objects, causality, states of matter, surfaces, and boundaries as an infant has.

In spite of that, there is no real self, just a process of arbitrary boundary creation: “the virtual self is evident because it provides a surface for interaction, but it’s not evident if you try to locate it.” These sensations would have to give rise to attraction or craving, and then to more complex volitional intentions and thoughts. For an infant these are simple things, such as the desire for food or milk and to be held, and anger at not receiving attention, etc.

Programming too high a level of positive emotion into an artificial mind would deny it the capacity for empathy with other beings, and the same holds the other way around. What makes us human are the emotions; unless you are an arahant, suffering is a part of life. Training an algorithm to have thousands of different emotions (craving, hatred, karuṇā, maitrī, mettā, etc.) would be impossible. However, if someone argues that there is no need for these emotions, then where exactly is the human, or the being?

As I mentioned above, the mind leads the world on, and the body (form) comes second (Mind, SN 1.62; see also the Kamma Sutta). The development and functioning of the body is governed by the mind. With an AI, the body is created first and only then does the mind appear; more precisely, the mind is installed. On top of all the above, this single fact would be enough to deny the possibility of a human-like AI becoming a reality.


#14

#15

The Buddha doesn’t make a distinction about what type of consciousness; simply, consciousness is like an illusion. The same for animals, the same for Brahmā. AI consciousness would be similar to human consciousness, being a product of it, yet not the same.

It’s difficult to verify for sure, as it depends on the “individual” to verify. However, with this line of reasoning you could say the same of any human being. How do I know that anyone is conscious and not simply a product of my imagination, or a physical doll playing a role in my manufactured fantasy?

Finding consciousness and putting it in a machine is indeed problematic. But our parents didn’t find consciousness and put it into us; through the proper conditions, we came about with consciousness in tow. The same could be said for AI: bring about the conditions and consciousness will follow. The idea of giving consciousness to AI is along the same lines as a creator god having given us consciousness.

This is what I was trying to explain with “the singularity”. When AI reaches a certain point it won’t need us to program it to develop proclivities or habits, or anything for that matter; it will do it on its own, simply because… it wants to… or because it can.

We are now considering two different forms of AI that you may want to investigate. One is merely functional, doing the task laid out before it, albeit with a hint of intelligence. The other is sovereign: it does what it wants based upon its own rationality, its own conditioning.

The AI will have these exact experiences as it grows and learns for itself, except that its growth will be much faster than any infant’s. We must also refrain from putting AI into the box of “like a human”, because while it is produced by humans with the intention of being human-like, its consciousness will ultimately be of its own nature.

At one point the very idea of a robot was considered impossible. Going to the moon was impossible. People now consider cessation from suffering, the ending of lobha, dosa, and moha, to be impossible. We must be careful when considering the possible and impossible.


#16

This is where the impossible part comes in. If I were to pinpoint the difference: making a robot is a materialistic undertaking, and there are many theories based on the materialistic approach.

Some believe that a deeper understanding of brain chemistry will provide the answers; perhaps consciousness resides in the action of neuropeptides. Others look to quantum physics; the minute microtubules found inside nerve cells could create quantum effects that might somehow contribute to consciousness. Some explore computing theory and believe that consciousness emerges from the complexity of the brain’s processing.
Does Our Brain Really Create Consciousness?

The problem is not what you are looking for, but where exactly you are looking for it! It is impossible to find a rabbit with a pair of horns or a turtle with feathers. When you search for fish in the sky, you will never find one.


#17

Actually, the EBT definition of consciousness is indeed objectively measurable.

As long as consciousness remains, it remains involved with form, supported by form, founded on form. And with a sprinkle of relishing, it grows, increases, and matures.

To measure consciousness, all we need to do is measure the increase of namarupa, named forms. By this measure, Google is conscious. Indeed, when just now asked, “Hey Google, are you conscious?” Google replies with “well, you’re made up of cells and I’m made up of code.”

We already do this. The Chinese image recognition database for good citizenship has a preference, a developing feeling and a developing perception for Chinese faces. And it has an aversion to jaywalking Chinese citizens.

When we give a computer an objective, the computer has to make choices among many alternatives. So we tell the computer what is disastrous, and it avoids disaster in its choosing. We don’t tell the computer what choice to make; it makes its own choices. We also don’t like babysitting computers while they make choices, so we end up having computers talk to each other to discover what choices to make; these are called generative adversarial networks. And we will also ask medical computers to prioritize treatment by cost/value. Those will be its emotions, its feelings, its perceptions and its consciousness.
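A tiny sketch of the first point (a toy of my own, not a real medical system and not a GAN): we never tell the program which choice to make, only which outcomes count as disastrous and how the rest score; it then makes its own choice.

```python
# Choices we forbid outright: the "disastrous" outcomes.
DISASTROUS = {"ignore_patient"}

def choose(options):
    """Pick the best-scoring option that is not disastrous."""
    allowed = {name: score for name, score in options.items()
               if name not in DISASTROUS}
    return max(allowed, key=allowed.get)

# Hypothetical cost/value scores for treatment choices.
options = {"ignore_patient": 99, "cheap_treatment": 40, "best_treatment": 70}
print(choose(options))  # → best_treatment
```

Even in this trivial form, the highest-raw-score option is excluded by the disaster rule, and the program "prefers" among what remains: a seed of the preferences and aversions discussed above.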

Yes. This is my concern as well. We are programming craving without ethics or view.

And we will give machines finer senses and stronger arms than we have…
…perhaps we should be careful here! :scream_cat:

How about training an algorithm to observe the Vinaya? Let’s start with “Do not kill”. And I think implementing celibacy would be an end to sexbots. There are not thousands of Vinaya rules.

For those who think that AI is hype and not a problem, I would point out that in an age where hobbyists upload videos of robots that shoot water at invading cats, it is not so very difficult to imagine such a robot pulling a trigger.


#18

This is just your assumption; that doesn’t mean it is true.

It is easier to believe that What You See Is All There Is (WYSIATI), even after being confronted with evidence that you have missed something that was right in front of your face, than it is to believe that you are aware of only a tiny fraction of what is going on around you. (What Is Cognitive Ease)

The definition of consciousness should be more precise. If making a being (satta) were possible, then obviously dependent origination would be wrong. Everyone has their own hypotheses about how AI works, but no one disputes that it takes a materialistic approach, and that makes it a myth.


#19

Well, there was the Princeton OpenWorm open-source project (with a somewhat out-of-date wiki page here and an article here). This was/is(?) an attempt to simulate a nematode worm (both within a virtual environment and in a Lego robot with sensors), with its 959 cells and 302 neurons, with some measure of success. It seemed to behave rather like a nematode in the virtual environment, anyway. Just a bit short of the 100 billion neurons in the human brain, though! :slight_smile:
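For flavour, here is a toy sketch (nothing like OpenWorm’s actual model, which is far more detailed) of the kind of thing such a simulation does: a few leaky integrate-and-fire neurons passing spikes along fixed connection weights, stepped through discrete time.

```python
class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential kept each step

    def step(self, input_current):
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return 1              # spike
        return 0

def simulate(weights, stimulus, steps):
    """weights[i][j]: synapse strength from neuron i to neuron j."""
    n = len(weights)
    neurons = [Neuron() for _ in range(n)]
    spikes = [0] * n
    history = []
    for t in range(steps):
        inputs = [stimulus[j](t) + sum(weights[i][j] * spikes[i] for i in range(n))
                  for j in range(n)]
        spikes = [neurons[j].step(inputs[j]) for j in range(n)]
        history.append(spikes)
    return history

# Three-neuron chain 0 → 1 → 2; only neuron 0 is driven externally.
w = [[0, 1.2, 0], [0, 0, 1.2], [0, 0, 0]]
stim = [lambda t: 0.5, lambda t: 0.0, lambda t: 0.0]
hist = simulate(w, stim, 10)
print(sum(s[2] for s in hist))  # → 2 (neuron 2 fires twice, via the chain)
```

Scale the neuron count from 3 to 302, replace the made-up weights with the worm’s mapped connectome, and attach the outputs to muscle models, and you have the rough shape of the project described above.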

I suppose the interesting question is what would happen if, with far more resources and future technology advances, we could do this for more complex creatures.


#20

SN 10.1: With Indaka

Here I got the impression that the body is formed through natural physical processes, based upon the actions of the mother. The “person”, however, only takes up the body through the process of kamma, craving, and becoming.