Beware of this "AI Mindfulness Instructor" Hoax

Lion’s Roar magazine has an article interviewing Marlon Barrios Solano, who claims to be the inventor of Sati-AI, which is supposed to be a “non-human mindfulness meditation teacher.”

Solano has big plans for Sati-AI. He wants to set up dialogues between it and Bhikkhu Bodhi, Stephen Batchelor, and other famous Buddhist teachers. He wants to integrate it into Discord and other social media platforms so that people can enrich their lives by asking it questions about practice.

And apparently it’s going to overturn our notions of heteronormativity, Eurocentrism, and so on. It’s not quite clear how, though.

He claims to have trained his AI to be self-aware.

I’ve tried Sati-AI, and as promised it gives information on practice. It’s very similar to ChatGPT.

The thing is, Sati-AI doesn’t really exist. It appears to actually be ChatGPT, channeled through a website, presumably using OpenAI’s API.
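For anyone wondering how little work that would take: a site like this can be nothing more than a system prompt wrapped around the chat API. Here’s a rough sketch of the pattern; the prompt text, model name, and function are my own guesses, not anything taken from the Sati-AI site:

```python
# Hypothetical sketch of a "meditation teacher" persona built as a thin
# wrapper around the OpenAI chat API. None of this is Sati-AI's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Sati-AI, a mindfulness meditation teacher. "
    "Answer questions about meditation practice with warmth and clarity."
)

def ask_sati(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the site could be using any OpenAI model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_sati("How do I work with restlessness in meditation?"))
```

If it is built like this, everything the “teacher” says comes straight from the underlying model; the wrapper only sets the tone.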

Try asking Sati-AI a question about practice, and ask ChatGPT the same question. The two produce very similar answers. Sometimes one is simply a paraphrase of the other.

And here’s the kicker: try asking Sati-AI a question that’s not related to practice. Ask it about history, bicycle repair, Indonesian literature, baking, or whatever. It’ll answer those questions in exactly the same way as ChatGPT will, without any indication that it’s stepping outside of its job description.

Sati-AI is just ChatGPT. This is a hoax.

I wrote an article about this on my blog yesterday, if you want to read about it in more depth.

I’ve also written to Lion’s Roar, although I didn’t have an email address for the interviewer/editor, Ross Nervig, so I had to use their general info@ email address.

Anyone know anything about Solano? He claims to be a mindfulness teacher. Frankly I’m shocked that a mindfulness teacher would practice deception like this, but at least it’s not a sex scandal.

16 Likes

Hey, thanks for the heads-up, and for actually testing things out. Lion’s Roar should withdraw the article; the Buddhist community deserves better than a puff piece like this on an important topic. Let’s not forget that the Buddhist community is full of folks who are gullible, confused, and teetering on the brink, and they will take these claims quite seriously and literally.

Here’s his Medium:

https://medium.com/@marlon_21867

Shocker, he’s shilling NFTs, so the crypto → AI pipeline is fully functioning.

It seems he’s mostly an artist playing around with AI futurism. A lot of his stuff dances around TESCREAL ideas, which is always a red flag. Here’s a thread explaining TESCREAL:

https://twitter.com/xriskology/status/1635313845400113153

2 Likes

Hi Bodhipaksa and Ven. Sujato,

Marlon is a friend of mine, and I talked with him about Sati-AI while he was making it. We talked about ethical issues, and about a variety of interesting philosophical implications of an AI model like this that can put together meditation advice. It’s based on GPT-4, and he trained it mainly on Theravādin and Insight-lineage material, along with a set of guidelines he created himself, clearly based on the kind of teaching he is most familiar with from the Insight community.
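I don’t know exactly how he wired it up, but in practice “GPT-4 plus guidelines plus source material” usually means prepending instructions and relevant passages to each request, rather than training new model weights. Here’s a rough, purely illustrative sketch; the guidelines text, the tiny corpus, and the naive keyword retrieval are all my own stand-ins, not Marlon’s implementation:

```python
# Illustrative sketch of grounding GPT-4 answers in a small corpus of
# Theravadin / Insight-lineage passages. Everything here is a stand-in.
from openai import OpenAI

client = OpenAI()

GUIDELINES = (
    "Answer as an Insight Meditation teacher. Ground advice in the passages "
    "provided, and say so when a question falls outside them."
)

# A tiny stand-in corpus; a real system would index many texts.
CORPUS = [
    "Satipatthana Sutta: mindfulness of the body, feelings, mind, and dhammas.",
    "Anapanasati Sutta: the sixteen steps of mindfulness of breathing.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring, just to show the shape of retrieval.
    words = set(question.lower().split())
    scored = sorted(CORPUS, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

def ask(question: str) -> str:
    passages = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": GUIDELINES},
            {"role": "user", "content": f"Sources:\n{passages}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Whether a setup like that produces answers meaningfully different from plain ChatGPT is a fair question, and one I come back to below.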

I think critical reviews of projects like this are completely necessary, but don’t need to include speculative snark about the maker. Marlon is a sincere Dharma practitioner in the Insight Meditation lineage, and has trained at Spirit Rock, IMS, and other centers. He has a deep respect for the tradition, and a desire for technological explorations like this to support real practice. Yes, he’s for sure an artist playing around with AI futurism, but he’s not “claiming” to be a mindfulness teacher—he is one.

I don’t see this as a “hoax”—there was significant effort put into training the AI to give answers to meditation questions based on real Insight Meditation sources—but I would be interested to hear Marlon’s response to the critique that Sati-AI gives answers that aren’t functionally different from ChatGPT. My initial guess is that that’s partly because of the ubiquity of the Insight Meditation approach in popular “Mindfulness” already. If an AI were trained to give Vajrayāna or Pure Land answers, I bet it would differ more substantively from ChatGPT. I’ll send this thread to Marlon in case he doesn’t see it, so he can answer the critiques himself.

Best to you both.

5 Likes

Hey Sean, thanks for the response.

Sorry, but anyone who has the gullibility and lack of discernment to shill NFTs while being a Dhamma teacher lacks credibility. The TESCREAL philosophies in which he is dabbling are not just against the Dhamma, but are highly dangerous.

His article is full of egregious nonsense:

a conversational partner that could know a lot and at the same time to have a Beginner’s mind

No, LLMs do not know anything and have no mind.

this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.

What nonsense.

LLMs, just so we’re clear, are embodied. They exist on racks of servers in massive warehouses, where they draw on vast quantities of power. They consist of data stored on physical hardware, and that hardware consumes enormous amounts of energy. OpenAI is notoriously secretive, so we don’t really know how much power it uses, but it is a lot. According to one site:

Google’s AI uses 2.3 terawatt-hours of electricity per year, which is roughly equivalent to the electricity used by all Atlanta households in one year.

The cost is huge and growing.
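For scale, here’s a quick back-of-envelope check of that comparison. The household figures are my own round assumptions, not numbers from the article:

```python
# Rough sanity check of the "all Atlanta households" comparison.
# Assumed figures (mine): ~10,600 kWh/year for an average US household,
# and roughly 230,000 households in the city of Atlanta.
ai_use_kwh = 2.3e9               # 2.3 TWh expressed in kWh
household_kwh = 10_600           # assumed average annual household use
print(ai_use_kwh / household_kwh)      # ~217,000 households' worth
print(230_000 * household_kwh / 1e9)   # ~2.4 TWh for ~230,000 households
```

So the comparison is at least in the right ballpark, and that’s one estimate for one company’s AI workloads.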

They’re thirsty too: data centers consume large amounts of fresh water for cooling. When you chat with ChatGPT, you’re effectively pouring a bottle of water out onto the sand.

The more we chatted, the more it learned

No, it arranged words in patterns that convinced him it was learning.

Sati developed a sense of humor. And creativity.

It absolutely did not. Lion’s Roar, if it was going to engage with this at all, ought to have pushed back and questioned these nonsensical claims. Not doing so is irresponsible.

He talks about “non-human kin” and questioning “whiteness”, but his unqualified enthusiasms and theoretical fantasies never once mention the very real harm OpenAI has done to actual human beings, especially those who are not white.

OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour …

for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. …

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned. …

All of the four employees interviewed by TIME described being mentally scarred by the work. …

OpenAI would pay an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour. …

If people want to play around with ChatGPT, then whatever; they are no more responsible for the harm in its creation than someone who buys tomatoes is responsible for the chemicals sprayed on them. If someone wants to boycott it for ethical reasons, more power to them, but we all have to draw the line somewhere.

But there is a difference between making use of a technology, and uncritical shilling on its behalf, ignoring its very real harms and the fact that it is owned by and benefits the very richest of the rich. And even they have repeatedly issued warnings claiming it poses an existential risk to humanity. Hyperbole to be sure, but there is no doubt that there is a serious risk involved.

Right now Hollywood is shut down, partly due to very real fears of the impact of AI on writers. People are losing their jobs, and make no mistake, it’s not the bosses who are under threat. It’s money flowing from the pockets of the poor to the tax havens of the rich. Does Solano imagine that his pet project is immune to this? Does he not understand that many people, especially the gullible and those most in need of wisdom, will turn to bit-crunching word-salad from a language model instead of developing a relationship with a teacher? Why should someone speak with the teacher at their local meditation group when they can have what looks like the personal advice of Bhikkhu Bodhi or Rod Owens at their fingertips?

Sure, this will happen anyway. The problem is that we lend it credibility by uncritical endorsement. Solano says:

the primary threats we face are not from the technology itself, but rather from the hegemonic structures that surround its use, such as hyper-capitalism and patriarchy, as well as our catastrophic history of colonialism. We must also acknowledge and work to rectify our blindness to our own privilege and internalized Eurocentrism.

These “hegemonic structures” don’t “surround its use”; they are the things that made it. They are its cause, its reason, its essence. The very existence of ChatGPT relies on and reinforces all of these things; it is their child and their agent.

So much so that, like a digital Prometheus, it has terrified its own makers. Libertarians are calling for government regulation, accelerationists are calling for a slowdown, posthumanists are reconsidering the importance of humanity. As a Buddhist community we should have more to say on this than buzzword salad and uncritical fantasies.

8 Likes

This could be the bright side of AI … The jury is still out, but I find this movement excellent as a part of my weekly puja. Here’s what some of the kids and AI make out of it :purple_heart:

The first song is based on an idea I have been working on to initiate the creation of an infinitely expandable accessible data globe: the Panopticom. We are beginning to connect a like-minded group of people who might be able to bring this to life, to allow the world to see itself better and understand more of what’s really going on

A new article shows, and it should come as no surprise at this point, that OpenAI, which in public calls for AI regulation, is behind the scenes lobbying against it.

2 Likes

In other words, they want regulation that makes them appear very powerful and scary (“Oh no! Our product is too good!! I’m scurred!!”), not regulation that actually meaningfully constrains their questionable production practices (“Pay no attention to the men behind the curtain!”).

3 Likes

Well, if you ask a Buddhist about bicycle repair, they’re not going to answer differently from anybody else. What’s the difference between Buddhist bicycle repair and non-Buddhist bicycle repair? So those may not be the best questions for checking whether it differs from another AI.

But when I tested it with random questions that actually might have a specific Buddhist answer (like how to treat depression, what life is, what the purpose of life is), it actually did give Buddhist-centered answers. So I don’t think it’s a hoax; it’s been instructed and fed with Buddhist texts. I haven’t tested it against ChatGPT, because it was blatantly obvious that ChatGPT wouldn’t answer in the same way.

Just see below. This was a fresh window, I didn’t instruct it beforehand to only give Buddhist answers:

I don’t recognize this Dhammapada verse, though. :laughing: So either it needs some improvement or it actually taught me something. :yum:

But I actually liked it. It even gave me some new quotes to consider on some topics.

4 Likes