Wow, thanks everyone for these responses.
Yes, it’s a tricky one. Most of the therapists I know would rather work on a dana basis; some make a living from therapy, which allows them to teach Dhamma for free. It’s a struggle in a commercial world, that’s for sure.
My goodness! I’m glad you came through okay. I hope you’re alright! And tell you what, you have friends here on this forum, so don’t be afraid to reach out in a message if you want to talk.
Right, good point. The rich will continue to pay for high-value therapy with experienced professionals. In Sydney, I work with Buddhist therapists, and most of them live and work in the eastern suburbs, where the money lives. Out west, where I am, there’s no money and few therapists, and I think you’re 100% right: that’s where AI therapy will be rolled out.
Indeed yes, this point is strongly emphasized in our course on Buddhism and Psychotherapy. (Which, by the way, is where I learned what therapy is and so could write this article!)
Right! Glad you came through too. Goodness, it seems like it’s already happening so fast. You have the advantage of a background in spiritual practice and meditation, so you can get some perspective on what is and is not helpful insight. Think of the many people who’ve had nothing, for whom this is their first experience of something that looks like insight.
My goodness, that’s just terrible.
Some time ago, I was chatting with a psychiatrist friend, and I noticed that he was wearing an Apple Watch. I mentioned that I wouldn’t use such a thing, as I didn’t want Apple peering at my heartbeat and other intimate functions. But he was unconcerned about privacy, saying he had nothing to hide. I didn’t have time to follow up, but I’ll share this study with him. If he, as a medical professional, shared his patients’ information, he would be struck off, or even face criminal proceedings. Yet tech companies do this all the time, and get nothing but a slap on the wrist. Why don’t we ban outright any company that behaves like this?
Great article on Vice, by the way, I’d recommend folks go ahead and click that link!
The chatbot in question was not actually marketed as a therapy bot, which opens up a whole range of other issues. They might try to fine-tune therapy bots to avoid these problems, but that’s not what people will actually use. Generally people dislike therapy and avoid it where possible. They’ll gravitate towards friend bots or erotic bots, which will gradually assume a therapeutic role, just as an actual friend or lover would.
I thought one paragraph was interesting:
The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.
This is true, but I think it’s also inadequate. What I think will happen is that people will learn to form empathetic bonds with unempathetic entities, namely neutral-sounding, unemotional bots. These machines are still modelling human behavior. The detached, authoritative, depersonalized voice of AIs comes across as an authority figure, a fatherly voice, and is internalized as a model of how to be a grownup.
(BTW, the Vice article refers to Emily Bender, who is a leading AI critic well worth listening to.)
One final point: if we combine the two ideas above, namely that AI therapy will be rolled out primarily to underprivileged communities, and that AI can lead to outcomes that are terrible, even suicidal, then the result is that AI will disproportionately create a mental health crisis among the poor and disadvantaged, with increasing suicidality and other horrors.
The line between tech futurism and eugenics is, as always, vanishingly thin.