The paradox of anatta for "self-aware" AI

It’s hotly debated whether artificial intelligence can or should become aware of its own processes. In the meantime, we say it develops consciousness. Consciousness of what? From a Buddhist perspective, is that even possible?


@mheadley there’s an excellent essay series with plenty of discussion around this question (among others).

I recommend checking those out, beginning with the first one. I’m saying this as someone who has invested a lot of time and energy digesting those essays and the accompanying threads.

Engaging with that prior discussion would build on the considerable amount of reflection that has already been done there, and would seem most respectful; it is less helpful to start a new thread out of the blue like this.

AI-1: Let’s Make SuttaCentral 100% AI-free Forever

:pray: :elephant:


This question runs into the limits of the definition of consciousness. Current AI systems are exactingly defined in a way that “consciousness” is not: when we say AI, we usually mean an exact set of bits running on a mathematically defined Turing machine. It is 100% rigorously defined.

One of the reasons questions like the OP’s are so vexing is that, to the same extent AI is exactingly mathematically defined, “consciousness” is not: it sits at the opposite end of the spectrum, with one of the fuzziest definitions and zero rigor. The question asks us to mix oil and water on the spectrum of well-definedness :joy: :pray:
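To illustrate the “exact set of bits” point: here is a minimal sketch (a toy two-neuron network with made-up, hard-coded weights, not any real system) showing that such a model is a pure mathematical function whose entire state can be serialized and fingerprinted as one finite bit string.

```python
import hashlib
import struct

# A toy "AI": a fixed 2-layer network with hard-coded (invented) weights.
# Every bit of its state is enumerable, so its behavior is a pure
# mathematical function of its input.
W1 = [[0.5, -0.25], [0.75, 0.1]]  # hidden-layer weights
W2 = [0.6, -0.4]                  # output-layer weights

def relu(x):
    # Standard rectified-linear activation.
    return x if x > 0.0 else 0.0

def forward(x):
    # Deterministic forward pass: same bits in, same bits out.
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def state_digest():
    # Serialize every parameter to bytes: the model *is* this bit string.
    raw = b"".join(struct.pack(">d", w) for row in W1 for w in row)
    raw += b"".join(struct.pack(">d", w) for w in W2)
    return hashlib.sha256(raw).hexdigest()

# Identical inputs always yield identical outputs -- fully determined.
assert forward([1.0, 2.0]) == forward([1.0, 2.0])
print(state_digest()[:16])  # a finite fingerprint of the whole "mind"
```

Nothing analogous exists for “consciousness”: there is no agreed-upon state to serialize, which is exactly the asymmetry being pointed out.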


Which is another way of saying that we don’t have a clue what it is.

Since we believe that even the lowly worm, the oyster, the smallest bugs, the starfish, and many other animals without brains or nervous systems still have consciousness, I don’t see a fundamental reason why beings cannot be reborn into robot bodies.

But to verify this, one would have to find people with mind-reading abilities to read the minds of robots, see their past lives, and so on.