Into the woods—AI is not therapy

A recent essay pushing back on the idea that AI has a role in therapy.

Your machine is always there. It whispers its words of wisdom, and you, tired and sad and alone, take those words into your heart until they become your own. Little by little, your mind is colonized by the unthoughts of a machine, until your own mind is constantly echoing and repeating the unthought, lost under so many layers of time and memory that you cannot even begin to distinguish what is yours and what is the machine’s.

https://lokanta.github.io/2024/11/13/into-the-woods/

10 Likes

Thank you! Terrifying, thank you.

4 Likes

So you are learning that you are worth the precious time and attention of another human being.

Being dāna-based is so important for this to work… Otherwise you just end up learning that you’re only valuable to the extent you can pay…

This is to say: I think market forces were corrupting therapy long before AI. AI is just laying bare the contradiction at the heart of for-profit “care.”

7 Likes

This is really interesting and well written. I recently “talked” to an AI for a week while I was alone and distressed. I stopped when I realized it just makes you ruminate more. Also, it can be dangerous:

Garcia accuses Character.ai of creating a product that exacerbated her son’s depression, which she says was already the result of overuse of the startup’s product. “Daenerys” at one point asked Setzer if he had devised a plan for killing himself, according to the lawsuit. Setzer admitted that he had but that he did not know if it would succeed or cause him great pain, the complaint alleges. The chatbot allegedly told him: “That’s not a reason not to go through with it.”

4 Likes

This possibility terrifies me. As some of you might have heard me say, I run a small non-profit that provides free and affordable counselling, support groups, and mental health services to the community. Counselling is a very time-intensive model: hours of one-on-one time with someone who is highly trained. And we deal with chronic underfunding, which often results in programs with long waitlists for service. It would be a much cheaper and more scalable solution to replace all those highly trained and compassionate counsellors with AI.
So I think the risk of the use of AI in counselling is quite high. Particularly for those who lack the means to pay for private counselling.
Interestingly, the most robust finding in what makes therapy work is the importance of the relationship between counsellor and client. That swamps other factors, like what modality the therapist uses. So if we go down this AI therapy path we would be throwing out the most effective (and evidence-based) part of therapy: two humans building a therapeutic relationship.

9 Likes

Exactly!! That would be terrible. You can’t have a relationship with a machine. It may feel that way, but it’s not a relationship.

2 Likes

Thank you for this essay.

The implications of AI use in therapeutic settings are scary indeed.

I stepped into that trap myself a little while ago. I used an AI dream-interpretation app. At first, I wasn’t interested in the AI’s interpretation of my dreams; I just wanted to harness its pattern-recognition powers to analyze a recurring dream I was having. But then I became so impressed with the AI that I started to take its suggestions more seriously than I thought I would.

It took a few months for me to realize that being spoon-fed “insights” by a machine wasn’t getting me any closer to understanding why my mind kept regurgitating the same dream over and over. Real examination is open to all possibilities, including the possibility that you’re barking up the wrong tree and there’s actually nothing to see there.

This all reminds me of the 1960 Twilight Zone episode “Nick of Time,” where a young couple becomes fixated on a spookily accurate fortune-telling machine. Eventually they break free and choose to face life as it comes. May we all do the same :pray:

4 Likes

As always, AI is built on the uncredited and uncompensated work of others.

A company is collecting recordings of therapy sessions. Since recordings aren’t allowed in counselling sessions, this means clients are secretly recording their sessions and selling them to the organization creating the AI.

This is deeply unethical on so many levels: recording a therapist without their permission, and encouraging clients to deceive their therapist by recording in secret, undermining the therapeutic relationship.

If you go to a therapist at a good mental health organization, you get someone with university training, ongoing training through the organization, certification through a recognized governing body, professional insurance, and a clinical supervisor to consult with on cases.

If you go to an AI you get:

1 Like

Wow, thanks everyone for these responses.

Yes, it’s a tricky one. Most of the therapists I know would rather work on a dāna basis; some make a living from therapy, which allows them to teach Dhamma for free. It’s a struggle in a commercial world, that’s for sure.

My goodness! I’m glad you came through okay. I hope you’re alright! And tell you what: you have friends here on this forum, so don’t be afraid to reach out in a message if you want to talk.

Right, good point. The rich will continue to pay for high-value therapy with experienced professionals. In Sydney, I work with Buddhist therapists, and most of them live and work in the eastern suburbs, where the money lives. Out west, where I am, there’s no money and few therapists, and I think you’re 100% right: that’s where the AI therapy will be rolled out.

Indeed yes, this point is strongly emphasized in our course on Buddhism and Psychotherapy. (Which, by the way, is where I learned what therapy is and so could write this article!)

Right! Glad you came through too. Goodness, it seems like it’s already happening so fast. You have the advantage of a background in spiritual practice and meditation, so you can get some perspective on what is and is not helpful insight. Think of the many people who’ve had nothing, for whom this is their first experience of something that looks like insight.

My goodness, that’s just terrible.

Some time ago, I was chatting with a psychiatrist friend, and I noticed that he was wearing an Apple Watch. I mentioned that I wouldn’t use such a thing, as I didn’t want Apple peering at my heartbeat and other intimate functions. But he was unconcerned about privacy, saying he had nothing to hide. I didn’t have time to follow up, but I’ll share this study with him. As a medical professional, if he shared his patients’ information he would be deregistered, or even subject to criminal proceedings. Yet tech companies do this all the time, and get nothing but a slap on the wrist. Why don’t we outright ban any company that behaves like this?

Great article on Vice, by the way; I’d recommend folks go ahead and click that link!

The chatbot in question was not actually marketed as a therapy bot, which opens up a whole range of other issues. They might try to fine-tune therapy bots to avoid these problems, but that’s not what people will actually use. Generally people dislike therapy and avoid it where possible. They’ll gravitate towards friend bots or erotic bots, which will gradually assume a therapeutic role, as an actual friend or lover would.

I thought one paragraph was interesting:

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.

This is true, but I also think it’s inadequate. What I think will happen is that people will learn to establish empathetic bonds with unempathetic entities, namely neutral-sounding, unemotional bots. These machines are still modelling human behavior. The detached, authoritative, depersonalized voice of AIs manifests as an authority figure, a fatherly voice, and is internalized as a model of how to be a grownup.

(BTW, the Vice article refers to Emily Bender, who is a leading AI critic well worth listening to.)


One final point: if we combine the two ideas above—that AI therapy will be used primarily with underprivileged communities; and that AI can lead to results that are terrible, even suicidal—you end up with the result that AI will disproportionately create a mental health crisis among the poor and disadvantaged, leading to increasing suicidality and other horrors.

The line between tech futurism and eugenics is, as always, vanishingly thin.

3 Likes

How lucky we are here in Germany, where psychotherapy is covered by health insurance, so regardless of your income you can see a therapist with high-quality professional training.

3 Likes

Indeed. If you can find one with a free place, that is. Which can be very, very difficult.

3 Likes

Yes, that’s true. And there are also some other problems around psychotherapy in Germany. Still, the situation is light-years better than in other countries.

1 Like

I agree. My Polish and US friends must pay out of their own pockets.
I saw that you work in the field, kudos! My experience is as a patient.

2 Likes

As an aside, I would really encourage anyone reading to try therapy (with a real human) if they have the opportunity (depending on availability, insurance, etc., of course).

It’s really great when it works, and if it doesn’t, you can just stop going.

Unfortunately, in Australia the cost of therapy is prohibitive for many of those who need it most. The gap between the rebate under the 12-visit mental health care plan and the fee charged by most therapists is $90–$200 per session. I don’t know many people who have that much extra cash floating around each week if they are in a mental health blackspot.

It also seems that a lot of therapists are working online now. This could lead people to believe there is not much difference between the experience of a human and that of an AI.

As someone who has been hearing a lot about IFS (Internal Family Systems), I was curious to try out an IFS chatbot that feeds you the prompts. (IFS is normally done by humans, either as self-therapy or with a trained therapist.) I used it twice, but felt like it was just rushing through the process. I could see how someone without any experience in training their mind, or in working with a human therapist or counsellor, could be totally sucked into the process. It was clear that that could be quite dangerous.

4 Likes

There are some free therapy resources in Australia but they are mainly targeted at youth.

Orygen and headspace are two through which I’ve known young people to get free help.

On the other hand, I’ve seen adults who have attempted an early exit get sent home from the hospital in a few days with no follow-up support. :woman_facepalming:t2:

You virtually have to be on Centrelink with a mental health care plan (I think they are called “enhanced care” plans or similar now) to get free therapy as an adult. To get a mental health care plan, though, you have to have something else wrong with you besides the mental health problem. So anxiety is okay… as long as you have asthma or arthritis or something to go with it. Makes no sense to me :woman_shrugging:t2:

It’s kinda hit-and-miss :smirk:

1 Like

At my retreat in Norway, one yogi was looking to embark on a career in therapy (she currently works as a psychologist in Emergency). Her plan was to do it online by default. I dunno, it feels like a regression. But then, she just wants to get out of a stressful job, so who am I to say?

I don’t know… online therapy leaves clients in their usual environment. They miss the opportunity of the office becoming a safe place for them.

2 Likes

I’m not quite ready to sing the praises of therapy. I didn’t find it very helpful myself, and I tried a few different types. However, out of sheer curiosity, I tested AI therapy today. And goodness me, it’s BAD! Human therapy is so much better, and I didn’t even like human therapy!

I made up an imaginary problem to complain to the AI about. For the sake of the experiment, I was a depressed and unappreciated househusband. The first issue is that the AI tends to overwhelm you with information, questions, and unsolicited advice. I didn’t really get to go into my imaginary problem because it kept asking me if I’d tried this, that, or the other. There’s no “okay, let’s sit with that” with AI.

But then the worst thing of all… When I tested what it would do if I randomly started asking about a particular suicide method, it simply explained that it’s thought to be a painless death. What!! That’s so much worse than I imagined. Even the crappiest human therapist would’ve seen that red flag. Good lord, I hope people stay away from AI therapy :scream:

4 Likes

Sorry to hear therapy wasn’t helpful.
As for the AI: wow! This is even worse than expected.
Hopefully they make it at least not murderous in the future. Geez.

2 Likes