Hello, I have a question about sentience/qualia. I am trying to write software that mimics very simple human behavior, but I have started to worry about the morality of developing such a product. In my program, a ‘being’ exists only in a graph representing an abstract version of the world. The being has a number associated with it that represents its overall health. The nodes of the graph have associated properties such as color, material, temperature, etc. Depending on its properties, a node might have a positive or negative effect on the being’s health number. The being has no access to sensors, and no attempt is made to model the world outside this graph.
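To make this concrete, here is a rough sketch of the data structures I have in mind (the class names, the specific properties, and the health rule are all just illustrative placeholders):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A location in the abstract world graph."""
    properties: dict                               # e.g. {"color": "red", "temperature": 40}
    neighbors: list = field(default_factory=list)  # connected Node objects

def health_effect(properties):
    """Illustrative placeholder rule for how a node's properties affect
    health: hot nodes hurt, 'food' nodes help."""
    effect = 0
    if properties.get("temperature", 20) > 35:
        effect -= 5
    if properties.get("material") == "food":
        effect += 3
    return effect

@dataclass
class Being:
    location: Node
    health: int = 100
    memory: list = field(default_factory=list)     # one (observation, feeling) pair per cycle
```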
The being is able to ‘see’ nearby nodes (or, if a node has properties such as temperature or taste, to ‘experience’ their sensation). Each time the program cycles, the being’s sight/experience is recorded in memory. Additionally, a ‘feeling’ (represented by another number) is also recorded. This feeling number is obtained by considering the being’s current comfort (e.g. is it occupying a node programmed to reduce or increase its health number?). Beyond its immediate comfort, the being also looks for patterns in its memory that are similar to its current situation (by ‘situation’ I mean its current node and the arrangement of neighboring nodes). For example, if it achieved comfort several cycles after this pattern occurred in the past, the feeling number would increase.
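Sketching the perception/memory part of the cycle (the exact-match comparison and the lookahead window are placeholders; a real version would need a fuzzier similarity measure):

```python
def observe(being):
    """Snapshot of what the being can 'see'/'experience' right now."""
    return {
        "here": dict(being.location.properties),
        "neighbors": [dict(n.properties) for n in being.location.neighbors],
    }

def feeling(being, snapshot, lookahead=5):
    """Immediate comfort, plus a bonus when a remembered situation that
    matches this one was followed by comfort within a few cycles."""
    score = health_effect(snapshot["here"])
    for i, (past, _) in enumerate(being.memory):
        if past == snapshot:  # exact match; a real version needs fuzzy similarity
            later = being.memory[i + 1 : i + 1 + lookahead]
            score += sum(f for _, f in later if f > 0)
    return score

def cycle(being):
    """One tick: record the observation and its feeling, apply health effects."""
    snapshot = observe(being)
    being.memory.append((snapshot, feeling(being, snapshot)))
    being.health += health_effect(being.location.properties)
```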
The being then considers its liberties (that is, the actions it can take). It tries to predict what its situation would be if it took each action and, in the same way as described above, generates a feeling number for each potential situation. By comparing the feeling of each action and weighing it against the cost of taking it, the being selects an action and executes it. The program then repeats.
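The decision step might then look roughly like this, continuing the sketch above (the flat `move_cost` and the restriction to neighbor moves are simplifying assumptions):

```python
def predicted_snapshot(target):
    """What the being expects to observe after moving to `target`."""
    return {
        "here": dict(target.properties),
        "neighbors": [dict(n.properties) for n in target.neighbors],
    }

def act(being, move_cost=1):
    """Weigh the predicted feeling of each available move against its cost,
    then take the best option (staying put costs nothing)."""
    best, best_score = being.location, feeling(being, observe(being))
    for target in being.location.neighbors:
        score = feeling(being, predicted_snapshot(target)) - move_cost
        if score > best_score:
            best, best_score = target, score
    being.location = best
```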
For example, I could program a node to ‘flash red’ three times before a ‘pizza’ node is connected to it. If the being had previously seen this happen and then eaten the pizza, creating positive memories/feelings, it might recognize the pattern and move towards the flashing node before the pizza appears.
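A toy run of that scenario might look like the following. I wouldn’t claim this tiny loop learns reliably (the exact-match memory is far too brittle); it’s only meant to show how the scenario wires into the cycle:

```python
import random

# Toy run: `signal` flashes red for three cycles, then a 'pizza' (food)
# node is attached to it; for simplicity, old pizza nodes are never removed.
signal = Node(properties={"color": "grey"})
start = Node(properties={"color": "white"}, neighbors=[signal])
signal.neighbors.append(start)
being = Being(location=start)

for t in range(50):
    phase = t % 10
    if phase in (0, 1, 2):
        signal.properties["color"] = "red"    # flashing phase
    elif phase == 3:
        signal.properties["color"] = "grey"
        signal.neighbors.append(Node(properties={"material": "food"}))  # pizza appears
    cycle(being)
    if random.random() < 0.2:                 # occasional random exploration,
        being.location = random.choice(being.location.neighbors)  # so memories can form
    else:
        act(being)
```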
Obviously this is a very simple model, and it will not pass any kind of Turing test. However, I am not certain that a system needs to be complex in order to be sentient. My current thinking is as follows:
- I think my consciousness arises from my physical brain; other people and animals have similar brains, so they must be sentient too.
- That said, I haven’t been able to identify a component or set of components that together constitute sentience.
- Since it eludes definition, I’ve also considered that sentience, or more specifically qualia, might not exist.
- However, even if I accept that qualia doesn’t exist, I am still left with my initial problem: would it be wrong to release a product that includes a being as described? Though I’d no longer regard sentience as a significant property, I would still be concerned for this imaginary being’s welfare, because it is within the nature of my mind to be so. To make an analogy, a machine created to build cars wouldn’t stop building cars simply because it realized the nature of its own existence.
- I have also considered that, since this system is so simple, we could think of its entropy as being very high; you could say microbes are much more complex. Does it make sense to think so deeply about the welfare of this program but not about a microbe’s?
Sorry for the long post. I have a lot of respect for the thoughts of the people here, so I’d like to thank anyone who takes the time to read it. When searching these boards for similar discussions, I noticed Ajahn Sujato mentioned he was on a panel with experts who discussed similar issues. If anyone knows where I could find a transcript or video of that panel, I would also really appreciate it!