Is it wrong to release even simple A.I. as a product?

Why do you think it might be wrong?

For everyone else who replied without asking a similar question: I observe a desire to jump in with a quick reply instead of a slower, more measured dialogue.


According to Buddhism, humans work pretty much the same way as your A.I. product. There is no self inside living organisms; they are as selfless as a computer. The only difference is the presence of the consciousness aggregate, which makes it possible for contact to exist, which in turn makes it possible for the other three aggregates to exist. So we have living organisms made out of five aggregates and non-living systems made out of just one. The difference between them is merely the elements they are made out of, but other than that neither of them has a self; both are systems perfectly conditioned by algorithms and other things, etc.

If you want to make your A.I. alive, you need to make it possess consciousness. Materialism has been refuted by scientific discoveries, therefore you won’t make it conscious simply by increasing its complexity. An insect with 5 neurons does possess the consciousness element, while a computer that can beat you at chess does not.

If you are to follow a Buddhist logic in trying to make your A.I. possess consciousness, then you simply need to provide a basis that can support consciousness, so that the kamma of a being can ripen in that field of conditions created by you. But this is much more difficult than it looks. First you need to find out what exactly it is about neurons and the composition of the brain in general that makes this system able to support consciousness, while A.I. attempts so far have failed. It is certainly not simply complexity and a capacity for high reasoning, because an insect with 5 neurons does possess consciousness while your computer does not.

A simple look at what living beings have in common and computers do not points to the liquid property. Maybe it has to be biological, maybe the material needed should possess certain properties. Maybe it also has something to do with quantum laws. I don’t know; that’s up for people like you to find out.

And how do you test whether it is alive or not? Well, you need to test whether it possesses feeling. Only through possessing consciousness can such a thing as contact arise, and only through contact can such a thing as feeling arise. You need to test whether it will react in a certain way to a stimulus. But this is not all, of course, because both living and non-living organisms react to stimuli depending on the way they are internally designed, depending on certain conditions present in them, etc. So you need to get smart in this area and invent a clever method to test the reaction and show that it is more defensive than the system was designed to be.


Thanks Basho. Appreciate your input.

I think that releasing code containing a kind of sentience to the public would be a cruel thing to do to that hypothetical being. If the program could experience qualia, and it suffered at the hands of whoever I distributed the code to, then I would be responsible. Am I understanding your question correctly?

I completely agree. My program is very simple, but I am still worried it may have the capacity to possess a kind of sentience, which I don’t want it to.

I’m not privy to the workings of every intelligent system, so I can’t say that all attempts so far actually have failed. Certainly none of them have succeeded in creating something that could pass the Turing test (and my program would not pass either). However, as you have already highlighted, complexity isn’t a prerequisite for sentience. The insect from your example would not pass the Turing test either, but we still try not to step on them on our way to the bathroom.

Though modern personal assistant programs like Siri are incredible, I don’t see any obvious signs of consciousness in them (they seem more like an elaborate interface for queries, scheduling, shopping, etc.). However, when I consider a driver-less car, I see a system that must have a degree of self-awareness. It needs to be aware of its physical dimensions, its position, orientation, velocity and acceleration. It needs to retain at least short-term memory, and to be able to find patterns in it (that ball is a few centimeters closer to the road than in the last frame). It also needs to draw conclusions and make decisions based on those patterns (the ball is getting closer to the road, better slow down).

I’m not suggesting that a driver-less car could be sentient, or that these cars are driving around thinking “Beep beep, I’m a car, watch out”. But if sentience does not arise from complexity, then shouldn’t it be possible to reduce a model of a mind to the point where taking away any further component would lead to a loss of sentience? If so, when I imagine what that simple model might look like, I see something similar to what I described above. It wouldn’t be a consciousness like ours, but is it so hard to imagine that there might be a glimmer of thought, however dull, being experienced somewhere in that system?
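To make that concrete, here is a rough toy sketch in Python (purely illustrative; the class name, thresholds and numbers are my own assumptions, not taken from any real driver-less-car software) of the kind of reduced loop I have in mind: a short-term memory of observations, a check for a pattern across frames, and a decision based on that pattern.

```python
# A toy sketch of the "reduced model of a mind" described above:
# short-term memory, pattern detection, and a decision.
# All names and thresholds are made up for illustration.

from collections import deque

class MinimalAgent:
    def __init__(self, memory_size=5):
        # Short-term memory: only the last few observations are kept.
        self.memory = deque(maxlen=memory_size)

    def observe(self, distance_to_object):
        """Store the latest observation (e.g. distance to a ball, in metres)."""
        self.memory.append(distance_to_object)

    def decide(self):
        """Look for a pattern in memory and act on it."""
        if len(self.memory) < 2:
            return "maintain speed"
        # Pattern: is the object getting closer frame after frame?
        closing_in = all(later < earlier
                         for earlier, later in zip(self.memory, list(self.memory)[1:]))
        if closing_in:
            return "slow down"  # "the ball is getting closer, better slow down"
        return "maintain speed"

# Example: the ball drifts toward the road over successive frames.
agent = MinimalAgent()
for distance in [4.0, 3.6, 3.1, 2.5]:
    agent.observe(distance)
print(agent.decide())  # -> "slow down"
```

Whether anything could be “experienced” anywhere in such a loop is, of course, exactly the question.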

You’re right, I’m struggling to find a solid test for sentience. Thanks for your detailed response, it has given me a lot of food for thought.


It seems that you are asking perhaps 3 questions.

  1. What if the AI doesn’t have sentience/qualia?
  2. What if it does – or “realizes the nature of its own existence”?
  3. Finally, your concern: “I would still be concerned for this imaginary being’s welfare because it is within the nature of my mind to do so.”

#3 suggests a classic pattern of suffering.
I’m theorizing that the answers to #1 and #2 might be strongly influenced by #3, not that #1 and #2 would go away.

To be bold here, in the context of sentience: I’m thinking about what might explain why the OP appears (to me) not to be more explicitly aware, such that it doesn’t more clearly organize the various facets of the question.


Evolutionary psychology suggests a direction: what function of emotions might have made emotions and a sense of pain/suffering an advantage?

The movie Bicentennial Man is quite interesting.

It would be better to develop a robot that believes in a Self than a being whose sELF is a GOD.
Anyway, this information may be helpful: a TED Talk by ecologist Deborah M. Gordon.

Note the phrases “without any leader” and “‘noisy’ systems that tolerate accident and respond flexibly” in the abstract. The AI may start off with programs but “without a main program”, and then let’s see whether your AI comes to believe in a self after it learns.

Apart from that, this may be important if you want to avoid developing a sELF. The technological trajectory of AI products is heading in that direction; you may prevent it by emphasizing temporary and working memory that is allowed to decay. But then there is a risk of having a sELF that is lunatic.


The short-term memory should probably be at the nano scale, such as a “nano capacitance”, which allows information to lapse if it is not retrieved.
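To illustrate the idea of working memory that is allowed to decay, here is a small hypothetical sketch in Python (the class name, decay rate and refresh-on-retrieval behaviour are my own assumptions, not something described in the talk or the posts above): entries fade on every time step and lapse unless a retrieval refreshes them.

```python
# Toy sketch of a decaying working memory: items lose strength each tick
# and lapse (are forgotten) unless retrieval refreshes them.
# All names and numbers here are illustrative assumptions.

class DecayingMemory:
    def __init__(self, decay=0.2, threshold=0.1):
        self.items = {}          # key -> {"value": ..., "strength": ...}
        self.decay = decay       # strength lost per tick
        self.threshold = threshold

    def store(self, key, value):
        self.items[key] = {"value": value, "strength": 1.0}

    def retrieve(self, key):
        """Retrieving an item refreshes it; otherwise it keeps fading."""
        item = self.items.get(key)
        if item is None:
            return None
        item["strength"] = 1.0   # refresh on retrieval
        return item["value"]

    def tick(self):
        """One time step: everything not retrieved decays, and weak items lapse."""
        for key in list(self.items):
            self.items[key]["strength"] -= self.decay
            if self.items[key]["strength"] < self.threshold:
                del self.items[key]

# Example: an unretrieved fact lapses after a few ticks.
memory = DecayingMemory()
memory.store("ball_position", 2.5)
for _ in range(5):
    memory.tick()
print(memory.retrieve("ball_position"))  # -> None, the memory has decayed
```

A hardware analogue of the same idea, as suggested above, would be a small capacitance that leaks its charge unless a read refreshes it.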