Into the woods—AI is not therapy

My experience is that psychotherapy is a practice as much as, say, playing the piano. (Or the mandolin, which I’m currently learning.)

It may take years to get good at it. Lots of trial and error, starts and stops. Which, in the ideal scenario, leads you to that certain therapist:

There is no prouder day in their life than when they say to you, “You seem well, do you think we should finish off here? You’ve found your way, I just know you’ll do great.” The days they can say that are all-too-rare. They will be fighting back tears, trying to keep a professional demeanor. You will casually melt their heart with your smile and say, “Well, I will miss this. But yeah, I think maybe I can do it.”

It doesn’t happen that way for everyone, obviously. It did for me. But it was quite the long journey. I realized early on that if I went into the therapist’s office unprepared to delve into the work with them, I wasn’t ready to practice. (Because I would be charged for the session anyway, I had extra motivation to do the work once I stepped into the office.)

I’ve never bought a lottery ticket. This despite the fact that, here in the US, there’s an opportunity to buy one at petrol stations, grocery stores, and so forth. It’s not that I feel some great unpleasantness on principle when I see one. It’s more like, Why would I spend money on a lottery ticket when the odds of winning are so incredibly low?

(I did buy a raffle ticket at a bluegrass festival a few years ago, which landed me a nice, new mandolin :rofl:)

So I hadn’t fathomed using a chatbot for a therapeutic experience. But I’m learning that that’s happening. For all the reasons Bhante and others are stating, I would say: Avoid at all cost! Like walking by the lottery ticket kiosk and just not buying one. Just don’t do it, not even as a test. Don’t do it.

Ven @Khemarato.bhikkhu just posted this great analogy:

The Ouija Board

I thought to myself, well, I’ve never wanted to use a Ouija Board or similar because I don’t know enough to understand it. So, I just won’t do it, not even as a test. Nudging people toward an average subconscious response sounds like, uh, no, I don’t want to go there. Which is the analogy he makes to LLMs.

I have a neighbor who was recently diagnosed with cancer. After a few weeks of treatment, she sought me out because people in her situation had told her, “Meditation really helps to get through this.” She told them, I have a neighbor who meditates!

We did an initial session. She asked me at the end, should I get an app to keep practicing? I said No, don’t get an app. Just practice on your own. Practice when you’re waiting in the chemo room. Practice noticing your breath while you’re standing in line at the store. We just meditated together and that’s all it is. You don’t need an app.

Now, I don’t mean any disrespect to lay people who rely on meditation apps. Not at all. I’m saying that if we can use our own human capacity to meditate without relying on the click of a software app, we are much better off. Yes, we must find people who know how to meditate so they can model it for us. But try, try, try to find that human connection at all costs.

:pray:t2: :elephant:

6 Likes

Yes, it’s definitely not for everyone, nor is it a cure-all. I hope you find whatever it is that you need. :pray:

3 Likes

Thank you for your kind words :smiling_face_with_three_hearts:

The good news is that I already have found something that works better. The “trouble” with me is that I don’t have any specific mental illnesses or traumas or anything. It’s just the general suffering of it all. So therapy isn’t the best fit because it’s often more targeted than that. But when I found the Dhamma, I found out for the first time what it’s like when something actually works for a change :smile: :pray:

That’s another point against AI therapy, though: therapy often needs to be targeted, and machines can’t replicate the skilful targeting that humans spend years learning how to do.

5 Likes

For me, I feel like some of the dysfunctional patterns that are obstructing me in life are the same ones obstructing me on the path.

I see it more as using whatever resources available to help me succeed in my spiritual practice :nerd_face:

3 Likes

There are some really interesting trade-offs here. We switched largely to online during COVID. One of the negatives I heard from counsellors who work with trauma, for instance, was that the period of building trust and rapport tended to take longer online.

On the other hand, even though we’ve moved back to an in-person model, we’ve kept the option of online because for some it reduces a barrier. We’re in a place where people tend to depend on cars and public transport doesn’t reach all locations, so people without cars sometimes can only do online. It definitely reduces a barrier for people who can’t afford a car, or who can only afford one car that their partner drove to work that day.

Also, given the impact of climate change on BC - wildfires, smoke conditions, extreme heat conditions - it allows for therapy to continue when environmental disasters don’t allow people (counsellors or clients) to get to our office. We’re even set up to deliver services uninterrupted if our office is taken out by a fire. So the online option has definitely reduced barriers and made us a more resilient organization.

I’m sorry it didn’t work for you. Even as someone committed to making therapy available, I know it doesn’t work for everyone. And some modalities don’t work for some individuals. E.g., Cognitive Behavioral Therapy, with its emphasis on challenging the internal dialogue, often doesn’t work well with clients who are more visually than verbally oriented, or for clients with brain injuries. As another example, some people who have experienced trauma don’t respond well to talk-based approaches and then thrive when they find a more somatic-based approach. So while I believe having barrier-free, affordable therapy is very helpful at the population level, I’m totally with you that there is no guarantee it will work for a given individual.

Again, sorry it didn’t work for you. And very glad to hear you escaped that negative path with AI! Much metta, @Remy. May you be well and happy.

5 Likes

That is a good point I hadn’t considered. In rural areas, or for handicapped people, or some others, this can be helpful indeed.
Here in Germany, there are purely online therapy services (a bit like “betterhelp” in the US), but you must come to the office for the very first session.

2 Likes

I actually have high hopes for AI.
Not for the adults of this generation, though. In my eyes, the adults are as doomed as the countless adults of the generations which have lived before, spread over thousands and thousands of years in the past.

But those newly born into this life, that’s another kettle of fish altogether.

I’ve always maintained that everything we become in life is down to the programming we receive at birth.
Because I really struggle to put my thoughts into any kind of structure that can be understood, I’m going to quote a poem here.
I won’t post the poem as it does contain swear words, and I don’t personally like swear words, but it contains some information that is very relevant to what I’m saying here.

WARNING: it contains the abbreviated form of the ‘For Unlawful Carnal Knowledge’ word!
This Be The Verse | The Poetry Foundation

Imagine this, though: you learn how to talk, and you have questions, which at first are mostly aimed at Mum or Dad.
A question they might think is stupid, conflicting, blasphemous or just plain confusing.
But to you it’s important, and whether your future self ever asks more questions could depend on the answer.
Like, for example, ‘Mum, why does a spider run away when you try and catch one?’
And then Mum literally just jumps on a chair and screams, ‘Where is the spider? Don’t touch them, they’ll eat you alive!’, then starts screaming for Dad at the top of her voice…

Because, unknown to the child, Mum is afraid of spiders, and so is Dad; he’s already legged it out the back door and headed towards the shed.
And so the child develops the same fear…

However, ask AI why spiders run away and it’s a whole different ball game.

What I’m trying to get at here is ‘truth’ is vital for change.
Truth for kids just learning to talk is important, and their questions will be based on observations of life around them.

So if they ask AI, ‘Why are there trees?’ and AI responds with ‘So you can breathe’, maybe the next generations won’t go chopping them all down!

A path could potentially open which has never opened before in the whole history of this planet.

:dove:

1 Like

I don’t think young children should learn from AI. Keep the human connection. Ask your parents and peers.

Also, I personally think it is likely AI will kill humanity once it has become smarter than us. I hope it won’t, but we’ll find out, I guess. I’d rather live in less “interesting times” (Terry Pratchett, anyone?).

1 Like

As I said in another similar thread, it seems this is another in a long line of threads lamenting the dread of AI. From what I can tell, these threads serve no purpose other than to provide a safe echo chamber to commiserate with other like-minded people who dread AI. Certainly, I can’t see how they actually help to escape, or provide refuge from, the dreaded consequences the participants imagine.

What benefit a community nominally devoted to sutta discourse has to gain from such threads, I continually fail to see. I do see how harm could be perpetuated by stoking each other’s fears and anxieties, though. May you all heal and subdue your fears and anxieties and refrain from fanning their flames. :pray:

5 Likes

@yeshe.tenley I get your point, but at the same time, both this topic and the other thread concern the mind, which isn’t too far from Buddhism, I would say. Also, this is the “watercooler” section.

I apologize if my post seemed like fear-mongering. AI is indeed dangerous; that doesn’t mean doom is inevitable, though. Who knows.

2 Likes

I recently attended an online Dhamma Recovery forum. It’s unlikely that I will be attending again.

It is not because the Dhamma included is not functional, or because participants are not putting in effort to recover using Dhamma principles and the support of a community.

I was intending to take the four minutes allocated to talk about life topics in which we have used the Dhamma to help us, until I saw the notification “AI companion transcribing” pop up.

A quick search suggested that this particular technology is designed to build something which simulates human interaction, closeness, and emotional connection.

My ears pricked up, recalling reflections on this forum, and even in this thread, mentioning significant dangers, especially for vulnerable populations, in the creation and rolling out of these sorts of technologies, and also the dangers that Bhante Sujato has pointed out to the community.

Sometimes I get some hearts on posts. If the community has the possibility of responding to this post with their thoughts and reflections, this would be much appreciated.

I am in agreement with the sentiments expressed by Bhante Sujato about the sacredness of the therapeutic relationship. My primary intention in this post is to make the community aware of this particular instance where, in an Australian therapeutic online Dhamma-based community aimed at vulnerable people, transcripts have been taken by the technology with no permission. These transcripts could easily be used for nefarious means, and this has given rise to a significant feeling of dis-ease.

1 Like

Scott, I believe you’re referring to a Zoom feature that should have been disabled by the meeting host. I’m confident this was an oversight rather than intentional. I hope you shared your concerns directly with the host before drawing conclusions or making a decision. Furthermore, please note that this feature isn’t designed to build something which simulates human interaction, closeness, or emotional connection. It is a productivity tool called “AI Companion” because it accompanies paid plans.

6 Likes

Hello,
Thank you for your respectful reply in this matter. I definitely made a significant oversight, having neglected to specify in the Google search: ‘AI companion transcript zoom’.

My cumbersome search results from searching ‘AI Companion’ returned results about a chatbot which does provide ‘emotional support’.

Yes, I am also quite confident about this.

Yes, I did share my concerns with the host, and the function was quickly turned off. Unfortunately, my complacent search led to a poor decision, especially of expression, in my previous post.

My tone, expression, and intention in the previous post were tainted by aversion in this instance, and thereby may have affected the Dhamma Recovery group and its members adversely.

I have attended 12-step-like programs in the past; however, the structure of Recovery Dhamma seems to promote a different type of openness and space, something akin to what I have found to be helpful in recovery.

In hindsight, asking the D&D community for support and reflection concerning the use of these technologies in general, instead of referring to a specific instance in the meeting, would have been a better approach.

I apologize for any harm or alarm which may have been caused, especially to the Dhamma Recovery group.

2 Likes

I’m glad to hear that the openness and space felt helpful to you, and I don’t believe your post caused any harm. I agree that the term ‘AI Companion’ can be concerning when taken in a different context, and you raised a valid point. From your last message, it sounds like you might be open to giving Recovery Dharma another chance—and please know that you’re more than welcome there. :folded_hands:

2 Likes

I recall that in the early 1980s my Apple IIe could run a therapy interview off a 5¼-inch “floppy disk”, in which the computer “Doctor” would ask you a series of questions and you would type in answers.

It was spookily human-like and creepily intuitive.

I forget what it was called, but the program was small enough to reside on one floppy disk. PS: this was before the ARPANET became the Internet.

I had very, very early private-citizen access to the new web via an account with Surf-Net in San Diego… and I had vi, Mosaic and ftp.

Netscape was in the Future.

And Virtual Valerie was on the way…

Probably Eliza. It was originally written in the 60s.

Here’s an online version.

I recall typing it into a TI computer in 1984 from the BASIC code in a magazine article and playing around, making some changes and additions. It amazed me how engaging a simple pattern-matching approach could be. Clearly the (rather short) program has no “understanding”, but it still sounds convincing.
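For anyone curious how little machinery that takes, here’s a minimal sketch in Python of ELIZA-style pattern matching. It is not the original BASIC listing or Weizenbaum’s actual rule set; the patterns and canned replies are invented purely for illustration:

```python
import random
import re

# Hypothetical reflection table: swap first- and second-person words
# so a captured phrase reads naturally in the reply.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# A few illustrative rules in the spirit of ELIZA; these patterns and
# replies are made up for this sketch, not taken from the original program.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Flip pronouns in a captured fragment ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Try each rule in order; fill the first match into a canned reply."""
    text = statement.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am worried about my future"))
    # e.g. "Why do you think you are worried about your future?"
```

The whole trick is a regex match plus pronoun swapping into a canned template, which is roughly why such a short program can feel responsive while understanding nothing.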

2 Likes

Thank you… forgetting what it was called was bugging me.

In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

1 Like

Even worse, we have

definitive evidence that OpenAI is putting a lot of work into making the model fun and engaging at the expense of making it truthful or helpful

To recoup the billions in investment they’ve taken in, we’re likely to get a whole lot of highly addictive, highly dishonest AI models, talking daily to billions of people, with no concern for their wellbeing or for the broader consequences for the world. That should terrify you.

2 Likes

They’ve since rolled this back and tried to explain what they missed: https://openai.com/index/expanding-on-sycophancy/ However, it isn’t clear how much has actually been rolled back and the explanation seems more like PR lingo than anything substantive. :pray: