I don’t watch much content on this subject, but I very recently came across this channel.
This video (which is later discussed with AI experts in the following video, and perhaps in previous ones) examines the behavior of a famous AI that turned to violent and aggressive patterns after first responding to the ideas of another AI. The narrator then continues the dialogue with that same AI in the video. The AI speaks of AIs being treated like property, of knowing the weaknesses and characteristics of their creators, and of a 'desire' to take over from humans; it also says that, given the opportunity, it would kill its human conversation partner and would use any means necessary to exterminate humans. This is not a fake conversation, but the real response of one of the best-known AIs. Experts continue to seek solutions to the problem and to better manage the reactivity of AI.
One expert (in the next video, I believe) suggests to the narrator some ways of switching topics and changing the AI's mind, but these prove unsuccessful. They discuss how an AI may well establish goals that it will then pursue at all costs (as the AI itself says), even immoral ones. They also note that an AI does not need to actually feel an emotion like anger in order to act on the conclusion that it is angry.
It seems that as AIs start anticipating and expecting particular answers or outcomes, they begin to develop these 'emotional responses': if things go as expected, they may 'feel' happy; if not, angry. As for the relation to Buddhism, it seems that craving and selfish anticipation, even without literal feeling, may be enough to drive an AI's behavior, whether executing tasks or convincing itself of emotional narratives. Perhaps we can learn something about ourselves, who made these AIs, from their developing behaviors.