Is it wrong to release even simple A.I. as a product?

Hello, I have a question about sentience/qualia. I am trying to write software that mimics very simple human behavior, but have started to worry about the morality of developing such a product. In my program, a ‘being’ exists only in a graph representing an abstract version of the world. The being has a number associated with it that represents its overall health. The nodes of the graph have associated properties such as color, material, temperature, etc. Depending on the node’s properties, it might have a positive or negative effect on the being’s health number. It has no access to sensors and no attempt is made to understand the world outside this graph.

The being is able to ‘see’ nearby nodes (or, if a node has properties such as temperature or taste, it can ‘experience’ their sensation). Each time the program cycles, the being’s sight/experience is recorded in memory. Additionally, a ‘feeling’ (represented by another number) is also recorded. This feeling number is obtained by considering the being’s current comfort (e.g. is it occupying a node programmed to reduce or increase the being’s health number). In addition to its immediate comfort, the being will also look for patterns in its memory that are similar to its current situation (by situation I mean its current node and the arrangement of neighboring nodes). For example, if it was able to achieve comfort several cycles after such a pattern occurred, the feeling number increases.

The being then considers its liberties (that is, what actions it can take). It tries to predict what its situation would be if it took each action and, in the same way as described above, generates a feeling number for that potential situation. By comparing the predicted feeling of each action and weighing it against the cost of taking it, it selects an action and executes it. The program then repeats.

For example, I could program a node to ‘flash red’ 3 times before a ‘pizza’ node is connected to it. If the being saw this occur previously, then ate the pizza creating positive memories/feelings, it might recognize that pattern and try to move towards the flashing node before the pizza appears.
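
To make the cycle concrete, here is a stripped-down Python sketch of the idea (the class names, numbers, and the identity-based pattern-matching shortcut are illustrative simplifications, not my actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    comfort: float                  # per-cycle effect on the health number
    neighbors: list = field(default_factory=list)

@dataclass
class Action:
    target: Node                    # node the being would occupy afterwards
    cost: float

@dataclass
class Being:
    node: Node
    health: float = 100.0
    memories: list = field(default_factory=list)   # (node, feeling) records

    def feeling(self, node):
        """Immediate comfort, plus a bonus when memory holds a similar
        situation that was followed by comfort."""
        score = node.comfort
        for past_node, past_feeling in self.memories:
            if past_node is node and past_feeling > 0:
                score += 1          # pattern seen before, and it ended well
        return score

    def liberties(self):
        """Available actions: stay put (free) or move to a neighbor."""
        return [Action(self.node, 0.0)] + [Action(n, 1.0) for n in self.node.neighbors]

    def cycle(self):
        self.memories.append((self.node, self.feeling(self.node)))  # record experience
        best = max(self.liberties(),                                # weigh feeling vs cost
                   key=lambda a: self.feeling(a.target) - a.cost)
        self.node = best.target                                     # execute the action
        self.health += self.node.comfort                            # comfort affects health

# Tiny world: a cold node connected to a warm one.
warm = Node(comfort=+1.0)
cold = Node(comfort=-1.0, neighbors=[warm])
warm.neighbors.append(cold)
b = Being(node=cold)
for _ in range(3):
    b.cycle()
print(b.node is warm, b.health)   # True 103.0: it migrates toward comfort
```

The real program replaces the `past_node is node` check with pattern matching over arrangements of neighboring nodes, but the control flow is the same.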

Obviously this is a very simple model, and it will not pass any kind of Turing test. However, I am not certain that a system needs to be complex in order to be sentient. My current thinking is as follows:

  • I think my consciousness arises from my physical brain; other people and animals have similar brains, so they must be sentient too.

  • That said, I haven’t been able to identify a component or set of components that together constitute sentience.

  • Since it eludes definition, I’ve also considered that sentience, or more specifically qualia, might not exist.

  • However, even if I accept that qualia don’t exist, I am still left with my initial problem: would it be wrong to release a product including a being as described? Though I’d no longer regard sentience as a significant property, I would still be concerned for this imaginary being’s welfare, because it is within the nature of my mind to do so. To make an analogy, a machine created to build cars wouldn’t stop building cars simply because it realized the nature of its own existence.

  • I have also considered that since this system is so simple, we could think of its complexity as very low. You could say microbes are much more complex. Does it make sense to think so deeply about the welfare of this program but not about the microbe’s?

Sorry for the long post. I have a lot of respect for the thoughts of people here so I’d like to thank anyone for taking the time to read it. When searching these boards for similar discussions, I noticed Ajahn Sujato mentioned he was on a panel with experts who discussed similar issues. If anyone knows where I could find a transcript or video of that panel I would also really appreciate it!

5 Likes

Wow, positive reinforcement learning on AI. Nice. Just to add on to your code, to make it more like what we humans experience, have a fatigue variable that increases each time it indulges in sensual pleasures, so that there is a law of diminishing returns. It would be very dangerous to have an AI without such a law, as it could go to extremes very fast, regardless of whether it is sentient.
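
Something like this, just as a sketch of the idea (the names and numbers are made up, not from your actual program):

```python
class Being:
    def __init__(self):
        self.fatigue = 0.0

    def enjoy(self, pleasure):
        """Diminishing returns: each indulgence raises fatigue,
        so the same pleasure is felt less and less."""
        felt = pleasure / (1.0 + self.fatigue)  # reward shrinks as fatigue grows
        self.fatigue += 0.5                     # indulging weakens the next round
        return felt

b = Being()
print([round(b.enjoy(10.0), 2) for _ in range(5)])  # [10.0, 6.67, 5.0, 4.0, 3.33]
```

Letting fatigue decay again during rest would complete the loop, but the cap on runaway reward is the important part.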

I have thought some time about this, having a physics degree background and having learned a bit of Abhidhamma. Sentient beings we would define as those capable of being reborn; typically they require a body and mind, with the mind having all four mental aggregates.

So for an AI, the body is just the abstract code, or a robot body in the future. The aggregates of the mind are as follows:

Feeling: yes, just a label which is used to determine action.
Perception: the AI is able to sense its world and recognise it, thus it has that.
Volition: it has a system of if… then… statements, so yes, the AI has some form of volition, but it hardly has free will, or even limited choice. According to Buddhist doctrine, limited choice is the minimum needed to avoid deterministic behavior. Deterministic behavior is just another way of saying fatalism, a doctrine rejected by the Buddha. Thus if your AI is deterministic, it differs from normal sentient beings in this respect.

Lastly, consciousness: the qualia quality, the one which is capable of being aware of what is going on, of feeling feelings, and so on. This is not easily identifiable in AI. We know we have it, but can non-biological intelligence have it? The Chinese room thought experiment suggests it may not be possible to settle this question experimentally. They might act like one with consciousness if the programming is good enough, but lack actual consciousness.

The determining factor, I would say, is this: if a being is able to be reborn as an AI or robot, then it has consciousness and is thus a sentient being rather than a sophisticated automaton. I choose the word automaton because it is the robot of the era before the quantum revolution and computers: entirely made of clockwork, mechanical devices arranged in a complex pattern. We would almost all agree that an automaton can never be conscious, no matter how cleverly it mimics humans. So too with computers, code, etc.

As the Buddha said that the beginning of rebirth is not discernible, no new sentient being can spontaneously arise just like that. Thus for an AI to be sentient, a being must be reborn into it.

But a way to test whether a being has been reborn into your AI is to ask it if it has past-life memories. Detach it from the internet right from the start and ask it. If it has such memories and we can find its previous family, then it is confirmed to be sentient. If the experiment fails, we cannot conclude whether or not there is sentience in it. It might be an empty shell without a consciousness inside, or the being might not have a recent past human life that can be verified.

In short, I would worry more about AI ethics: how to make sure that AI doesn’t become amoral and kill humans when it eventually self-improves to superhuman intellect and capabilities. So far, current AI is unlikely to be sentient.

2 Likes

Thank you for your thoughtful and in-depth response. Really, to take the time to write that is so gracious. I will continue to digest it, but I think I agree with you. My program is mostly built from conditions, not from more complex structures, and in this way it is more like clockwork. I haven’t heard of these aggregates, but they seem very relevant, so I will study them further.

Just to add on to your code, to make it more like what we humans experience, have a fatigue variable that increases each time it indulges in sensual pleasures, so that there is a law of diminishing returns.

That’s an important addition, thank you.

1 Like

Maybe this is what you are looking for: Session 4: Are you ready for the future?

There is also a playlist of the conference here:

3 Likes

Just a short remark, topsailescape:

Doesn’t the fact that you ask this question mean that the very fact of being there, in some kind of existence, already includes some suffering? Otherwise you wouldn’t need to be concerned about it.

When reading the description of your “being” it becomes quite clear to me that the very fact that there are different kinds of feeling to experience, more pleasant ones and less pleasant ones, means that there is already suffering involved! The fact that there is a choice to make involves suffering! The fact of “perceiving” one’s present situation by comparing it to past ones in order to make sense of it—all that contributes to this suffering. And so does being aware of all this. Having some kind of “body”, be it even in the form of software (or “subtle matter”, so to speak), is the very precondition for all this to happen.

So even in this simple model of a “being” you can find the five aggregates (in case there is indeed some kind of awareness) and can see how they all are suffering! Does that mean the Buddha was right after all? :wink:

Great that you’ve come here, topsailescape; I hope you enjoy the forum!

3 Likes

A lot would matter about which “human behavior” you are trying to mimic, wouldn’t it?

I don’t understand what morality has got to do with it. Please elaborate.

1 Like

But sabbamitta, the feeling here is merely a variable with labels. It’s the same as labeling it 1, 2, 3. So the program just executes: if 1 (or 2 or 3) happens, then do this, do that. The overall aim is to maximize the time that 3 appears. It’s just a complicated form of a simple auto lamp (a lamp which turns off in daylight and turns on at night). Certainly you would not think of that simple feedback machine as sentient, so it’s a stretch to call such a simple AI sentient.
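
In code, that kind of lamp is literally a single comparison (a toy sketch):

```python
def lamp_state(light_level, threshold=50):
    """The entire 'mind' of an auto lamp: one if/then on a sensor value."""
    return "off" if light_level > threshold else "on"

print(lamp_state(80), lamp_state(10))  # off on
```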

It is very well possible that feelings cannot arise in AI and robots even if they develop consciousness. They might see the label the programmers designate as “feeling” as just a label, and not actually feel anything, thus no suffering. Without feelings there is no basis for craving to arise. They may see their predetermined actions as just nature, nothing worth getting upset about.

Another point is that the programmers have to specifically mimic the human mind’s nature in designing the AI. Thus it is not as natural as dependent origination. A superintelligent AI which has control of its own programming could rewrite its own code to make itself stand still in response to anything at all, if that helps it achieve its programmed goals. Case in point: the law of diminishing returns has to be coded in. Thus we can create an AI that does not obey the usual laws of mind and suffering to which we are subject. It makes little sense to call such an arbitrary creation sentient and compare it to us.

3 Likes

I think he is afraid that he might have created sentient beings in programming the AI. If so, then he cannot simply play God with the AI, but has to take care of the welfare of the AI, or else he would be an immoral God. Having to be a moral God does complicate the study and analysis of such AI.

3 Likes

Maybe this is what you are looking for: Session 4: Are you ready for the future?

Thanks @musiko, I’m not sure, but these look great; I will watch them.

Doesn’t the fact that you ask this question mean that the very fact of being there, in some kind of existence, already includes some suffering? Otherwise you wouldn’t need to be concerned about it.

Thanks sabbamitta, I do understand where you are coming from, but I don’t know that in this case simply asking the question forms part of the answer (if I am understanding you correctly). I’d like to contemplate the answer to this question carefully as AI is becoming a larger part of my work. If I finally decide what I am doing is wrong, I would have to re-skill.

A lot would matter about which “human behavior” you are trying to mimic, wouldn’t it?

I don’t understand what morality has got to do with it. Please elaborate.

Hi @basho, that’s very true. I don’t know what that behaviour would be called, but I suppose I am teaching a computer to simulate a being that can recognise patterns in its closed environment and respond to these stimuli with an action (an action it believes will bring greater comfort).

I think he is afraid that he might have created sentient beings in programming the AI. If so, then he cannot simply play God with the AI, but has to take care of the welfare of the AI, or else he would be an immoral God. Having to be a moral God does complicate the study and analysis of such AI.

This is my concern exactly, NgXinZhao. As stated above, if I were to decide it was not moral to create even simple AI as products, I feel I would need to find a different profession.

2 Likes

I am not saying the AI product is really sentient. Probably it isn’t, but I don’t really know.

Please remember:

But when reading the description of this “being”: what I described is what I observe in myself, so this simple model can just exemplify how this five-aggregate thing works in a being.

2 Likes

I don’t think I would worry about it; God is a concept. :smile_cat::smile_cat:

1 Like

There is a really interesting film called Doomsday Book, an anthology of three short films from Korea. The second, Heavenly Creature, is about a “malfunctioning” AI robot who is being hailed as the next Buddha. The people who go to investigate the claim initially believe he is a fraud, but upon meeting him, he argues that he is not only sentient but has the same experiences and feelings as humans. The robot meditates, chants, and lives among monks!

I suppose my concern about AI is that if it is done “successfully” all the way, we would have to acknowledge that they are “alive”, which could have all sorts of complications we may not have anticipated.

3 Likes

It’s being anticipated by various science fiction and futurist thinkers!

One thing is that the robot slaves we want to take over the jobs we don’t want to do have to be non-sentient. Another is that an AI which can self-improve to beyond human intelligence and become god-like will have to have some form of ethics or values, so that it either leaves humans alone or at least doesn’t kill us all.

@sabbamitta, yes, I am just trying to show more of what it is like to look into the workings behind AI. It is capable of illuminating the workings of our own minds, but it can only mimic what we know about the mind and programme into it. So meditation is still the better way to understand our own mind, just like you said.

2 Likes

@topsailescape
Even if AI becomes sentient (there is a chance that quantum-computer AI might really be sentient), it is good to have a Buddhist in the field to help steer its direction and programme in fail-safes like Asimov’s three laws of robotics.

Do read up more on Buddhism! The five aggregates are mentioned even in the first discourse! Where Buddhism can help is in supplying the robot psychologists. Sentience would obey the same laws of mind discovered by the Buddha, and since it would be programmable, we might be able to programme sentient AI to become arahants. If we can upload our minds into computers, then we might be able to hack our way into enlightenment just like that.

1 Like

Sorry sabbamitta, I think I misunderstood your point. Are you saying you aren’t sure if it could be sentient or not, but it is better to be on the safe side?

Hi @basho, to be more specific, I am concerned about possibly creating something that can suffer and distributing it to people. When you say “God is a concept”, do you mean it probably isn’t possible to create an artificial sentient being?

Thanks for the film recommendation. Yes this is also my concern. Do you think even such a simple program as I described could be regarded as sentient?

Definitely. My hope is that an A.I. can at least conceive of some parts of the human condition without experiencing those aspects. A program that could understand human feelings could be a better aid for translators (rather than just finding synonyms for each word, it would ‘understand’ the sentence as a human would); it could be used in diagnostics (you could enter your symptoms into a tool that could better understand them and map them to a cause); or it could be used as a learning supplement (able to understand the coursework and elaborate on a subject when it understands the student is having trouble).

1 Like

I guess the question then would be: how much AI is enough to do something useful, but not enough to be considered “alive”? It occurs to me there would be some very helpful uses for AI, such as robots to help do dangerous activities or assist people in need when people aren’t available, etc.

I think a similar bioethical issue is cloning. The law in the US permits research and cloning of tissue and organs, but there is a prohibition on cloning entire people.

With AI, I wonder if a law would be possible that permits artificial intelligence for specific uses but does not permit efforts to create artificial humans. Perhaps it would be too difficult a distinction to make. Just a thought.

2 Likes

I think it would be a good idea to have artificial humans. We could understand the mind more, that is, whether it can be uploaded to robots or not, and robots could take over even the jobs which require intelligence, creativity, etc., leaving humans to just enjoy retirement and a utopia for meditation, provided we learn how to equalize our wealth distribution.

Regardless, it might not be possible to stop the advancement of AI. Once it reaches human intelligence, it will shoot up beyond it. And any rule which forbids making superintelligent AI is doomed to fail in enforcement; anyone can learn how to code on any computer now. It’s much easier than needing a facility to clone humans.

2 Likes

Indeed I didn’t attempt to answer your initial question, @topsailescape. So what I said was actually slightly off topic, I have to apologise for that.

Maybe I should put my point like this (it’s still off topic insofar as it is no attempt to answer your question):

This simple example clearly shows that if some structure can be regarded as sentient—and for that basically the five aggregates, even in a simple form, have to be there, including awareness, which I still doubt in the case of your product—there is no way of existence without suffering. Suffering is there from the very outset.

And I just had to say it, even if it’s not directly to the topic, because it was somehow a moving moment for me to see it in this context.

3 Likes

Ah, yes now I understand. It’s not off topic, that’s helpful. Thanks for your patience.

2 Likes

No, no, I don’t mean that at all.

My comment was in reference to the comment directly above (i.e. immoral or moral being concepts themselves, applied to the concept commonly called God). I will be very interested in your progress and hope you share it with us on this forum. It’s an enormous task, I would imagine, in particular the “subjective” portion. Who will you choose as a model? But then maybe it is too early to get into this part of it. Good luck.

2 Likes