Hey, thanks for the info, it’s a fascinating subject. FYI, the last thing I did before starting my translation project was a panel with a bunch of futurists, including:
- a robotics engineer
- a professor who was growing a set of neurons in a petri dish and teaching them to play music
- a legally recognized cyborg with a brain implant allowing him to communicate with satellites
- a performance artist who, among other things, did experiments where his body was under remote control
Needless to say, it was awesome.
I think the frontier between AI and Buddhism is very promising, and the contribution of Buddhism is sorely needed. As to the root question of whether computers can be sentient, in my view this problem is still too often misconstrued following Turing’s question: “can machines think?” In the dhamma, of course, the key question is not thought, but consciousness or sentience. So long as this distinction, and the dependence of thought on consciousness, is not clear, there’ll be no way of answering the question.
In general, I would say that there’s no reason in theory why a machine should not be sentient. But I don’t think current computing technology is on track to achieve this. Something radically different is needed; perhaps the only technology on the horizon that might fit the bill is quantum computing. Because quantum computers operate in shades of grey, not just binaries, I think they might serve as a vehicle for consciousness. But that’s purely speculative, of course.
The idea that an artificial consciousness would experience things through a “bare awareness” is fascinating, but I think it’s wrong-headed from both a technological and a psychological point of view.
From a technological point of view, the idea that code is neutral is a fantasy of the technologists, an ideology that coders hang on to with a rather suspicious degree of emotional investment. But the reality is that code is the product of desire. Literally every element of code in every computer program is there because someone wanted it there. And this is why, as studies are showing more and more, programs show biases of sexism and racism and all the rest, no less than humans do.
As for the psychology, the idea of an unfiltered bare awareness is not really accurate, as all consciousness is conditioned. But insofar as bare awareness can be achieved, through an equanimous response to stimuli, it is achieved not by failing to develop a partial consciousness, but by growing past it. That is to say, our ordinary consciousness, based on choosing, liking, and disliking, is not a failure but a stage of growth. That’s how consciousness works, and if we are to grow a true AI, it will be by following the same route.
To assume that an AI would be equanimous and therefore like an arahant who is also equanimous is a classic example of Ken Wilber’s pre/trans fallacy. Rather, insofar as a computer is equanimous, it is so because it has not yet learned how to experience other emotions. An arahant has learned those emotions, and has also learned how to be free of them. You can’t just skip over the in-between stage, which after all represents almost all of consciousness as we know it.