Anything positive to say about AI

Not a newcomer, but I wanted to ask: on this forum, is it prohibited to say anything positive about AI?

3 Likes

Not really. :slight_smile:

SC-Voice on the main website uses AI, there’s at least one AI-powered tool posted and discussed here (@abuddhistview’s Vectoral Sutta Finder), and there’s a recent Vectoral Pattern Matching analysis of suttas on the front page right now.

It’s fair to say that, given Bhante Sujato’s AI articles, a certain critical opinion leads the forum and the community. However, debate is certainly allowed.

Your celebratory post is not censored, for example. :slight_smile:

4 Likes

Please note that SC-Voice is a separate application. It is based on SuttaCentral’s data for root and translation texts, and there are links from the main SuttaCentral site to SC-Voice, but they are two different things.

In the past, SC-Voice was hosted on SuttaCentral’s domain (voice.suttacentral.net), but we have changed that in order to honor Bhante Sujato’s wish to keep SuttaCentral free from AI, and we have our own domain now (sc-voice.net).

7 Likes

This is a forum dedicated to Early Buddhism and Early Buddhist texts. The recent explosive usage of AI has rapidly impacted humanity in a multitude of ways. Since AI has collided with Buddhist texts and Buddhist practice, we’ve had many diverse discussions about the implications that arise and how AI can, should not, or cannot be used on this forum. If a post that has something to do with AI is flagged by the community, the moderation staff will review it and determine appropriate action.

There is an entire internet full of forums that welcome general discussions about the uses of AI, but on this forum we maintain the focus on Early Buddhism and Early Buddhist texts.

If you’d like to dig deep and discover the implications of AI on Buddhism and on the welfare of humanity, please read the Stochastic Parrots series by Bhante Sujato.

7 Likes

One of the issues these days is that the term “AI” is now used for just about any kind of data processing tool, which is not the way it was used before all of the commercial hype about LLMs (chatbots). SC-Voice is not AI; it’s a text reader, and those types of tools have been around for a long time. Optical character recognition isn’t AI either; it has also been around for a long time. Machine learning tools used to categorize galaxies for astronomers aren’t really AI either, though they’re a related technology because I believe they use neural nets to achieve “fuzzy” pattern matching. There is nothing wrong with using software tools like these to save time when dealing with massive amounts of data, or to automate something like having a sutra read aloud.

LLMs have become notorious mainly because they don’t know how to say, “I don’t know.” Instead they write little essays full of made-up content that looks vaguely like real content, and that’s what LLMs are designed to do under the hood. They are like autocomplete spellcheckers that generate whole essays instead of single words or phrases. That’s why they routinely go off the rails when asked about something they have little training data on: they just make something up unless guardrails are put on them to stop them from responding. They aren’t intelligent, but they fool people when the content they produce is good.
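The “autocomplete run in a loop” idea can be sketched with a toy bigram model. This is a deliberately tiny illustration, not how any real LLM is built; the corpus, the table, and the `generate` function are all invented for the example:

```python
import random

# A toy bigram table standing in for a trained language model. A real LLM
# does the same job at vastly larger scale: given the text so far, emit a
# statistically plausible next token. Nothing here represents "knowing".
bigrams = {
    "the": ["sutta", "text"],
    "sutta": ["says", "describes"],
    "says": ["the"],
    "describes": ["the"],
    "text": ["says"],
}

def generate(word, length=6, seed=0):
    """Autocomplete run in a loop: keep appending a plausible next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        # If the word was never seen in training, the model still produces
        # *something*: this fallback is the toy version of a hallucination.
        word = rng.choice(bigrams.get(word, ["the"]))
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Run it with different seeds and it produces different but equally fluent-looking strings; at no point can it report that it doesn’t know.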

We really should be more specific in our minds about these different kinds of software. An LLM is what we are usually calling AI. The other tools are not really the issue.

14 Likes

Hear, hear!

One positive thing I do have to say about AI (LLMs specifically) is that they have the potential to bridge the gaps between domain-specific vocabularies that often serve as barriers to people learning about or engaging with specific subjects. Good examples are domains like finance, and even software engineering, which are full of confusing terms or non-standard uses of words that “regular” people are often unfamiliar with, which I suspect is sometimes by design.

I think once the current bubble pops we’ll be able to take a more objective look at LLM usage, especially models that are significantly more efficient and capable of running entirely on a local device rather than the resource-guzzling behemoths big tech is shoveling its money into, and see them for what they actually are.

2 Likes

But why bother to learn to code, or trade, if the model can do that instead of you and better than you?

New evidence strongly suggests AI is killing jobs for young programmers

It’s a brutal time to be a recent computer science graduate.

1 Like

Right, my point is that I think a positive use of micro-sized LLMs would be to help bridge these domain gaps, versus having the models attempt to do the job or task for us.

2 Likes

One of the issues these days is that the term “AI” is now used for just about any kind of data processing tool, which is not the way it was used before

I suspect this is because the earlier applications didn’t closely resemble human behaviour and outputs. They were just “smart” and “very smart” (neural networks) computer tools. With the LLMs, it is very easy to mistake their messages, texts and other outputs for those of humans.

But it says

Artificial Intelligence (AI)

This website has AI content. This website does not use Generative Artificial Intelligence. All AI content on this website was created using only 1-to-1 AI transformations that preserve semantic content without misrepresentation, embellishment or omission.
SC-Voice

Instead they write up little essays about topics that are full of made up content that looks vaguely like real content. And that’s what LLMs are designed to do under the hood.

We are at the very beginning of the AI epoch, like the first minutes of a newborn’s life, but they have already learned how to react correctly when they don’t “know” something. I’ve been using the latest GPT for about a month and have not noticed a single hallucination (in “extended thinking” mode).

Other users also say that:

You can train hallucinations out of the model. You train them at long context agentic workflows and ensure the agent just keeps working if it doesn’t know the answer to figure the answer out. This is what OpenAI is doing with GPT5.2 and it is about 100x more reliable than Gemini at long context agentic workflows as a result

https://www.reddit.com/r/OpenAI/comments/1psge99/comment/nvasiow/

Besides, in order to make up a “topic full of made-up content that looks vaguely real”, one needs to be at least somewhat intelligent. A 14-year-old (even a very intelligent one) can’t make up a lot of content on lofty issues of quantum electrodynamics or critical theory that would look correct to an adult expert in quantum physics or critical theory. An 8-year-old or a 3-year-old, even less so. Dogs can’t do that, chimps can’t, elephants and dolphins can’t. Computer systems from 30 years ago, and even from 7 years ago, couldn’t do it. But these LLMs really can have a meaningful conversation with an adult human expert. So they do have something “intelligent” in them…

They aren’t intelligent, but they fool people when the content they produce is good.

If they produce content that is (sometimes) good even for human experts, are they really not intelligent at all?

I don’t know exactly what happens “inside” the LLM’s “brain” – is there some kind of awareness or self-reflexivity or consciousness…? I agree, it may all just be very complex, high-order mimicking, without any understanding or intelligence. But suppose they get 2 or 5 or 10 times better at this mimicking, to the point that they can efficiently do everything a human can do, and more. What would be the point of the “understanding” and “awareness” we think we have (many researchers think our sense of our own sentience/consciousness/intelligence is a tricky illusion), if the machines mimic it so perfectly that you can’t tell the difference? Or if the only difference you can tell is that the machine is “smarter” than you are? These are, of course, philosophical questions with no immediate answers, but it looks like we will have every reason to ask them of ourselves in 5-10 years…

They are like autocomplete spellcheckers that generate whole essays instead of single words or phrases.

Formidable task! It requires multi-level emulation of the cognitive, emotive, and other systems of an adult human being, and the building of complex models of the world, subject, language, culture, etc., and the various complex relations between them…

1 Like

I have just seen this:

Whether this is true or not, it seems pretty far removed from the early Buddhist texts. What is the relevance of this conversation (or this tweet) to the topic of this forum?

3 Likes

If Buddhism wants to survive and succeed in the modern world, it needs to, at the very least, understand this world.

1 Like

I don’t think the aim of Buddhism is survival in the modern world, but rather understanding dukkha in this world and beyond. In MN 43 this is understanding:

“Reverend, they speak of ‘a witless person’. How is a witless person defined?”
.
“Reverend, they’re called witless because they don’t understand. And what don’t they understand? They don’t understand: ‘This is suffering’ … ‘This is the origin of suffering’ … ‘This is the cessation of suffering’ … ‘This is the practice that leads to the cessation of suffering.’ They’re called witless because they don’t understand.”
.
Saying “Good, reverend,” Mahākoṭṭhita approved and agreed with what Sāriputta said. Then he asked another question:
.
“They speak of ‘a wise person’. How is a wise person defined?”
.
“They’re called wise because they understand. And what do they understand? They understand: ‘This is suffering’ … ‘This is the origin of suffering’ … ‘This is the cessation of suffering’ … ‘This is the practice that leads to the cessation of suffering.’ They’re called wise because they understand.”

The foundation of the Buddhist path is to live and meditate in such a way as to remove defilements of the mind so that one can see clearly. I doubt there are many in the AI industry who have developed their minds in this way enough to understand some of the dangers of AI to humanity.

7 Likes

I seem to remember the opposite. When I took AI at uni, it covered things like greedy algorithms, graph traversal, some of the statistics behind machine learning, and symbolic computation like reasoning with Prolog, along with the culture associated with its origins at the MIT AI Lab (Common Lisp systems, which often featured symbolic reasoning, programs that write programs, and software users could modify at run time), which is also where the term “hacker” and copyleft FOSS come from (though the engineering funding behind it was also machinery of the military-industrial complex).

Now when people say “AI”, I assume they mean LLMs, specifically generative LLMs in this incarnation of 3-4 mega-corps; they generally don’t mean that older sense, or else the term is used in corporate-speak ways.

1 Like

What do you mean by “modern world”? If your reference is to “the West”, then you’re probably right that Buddhism isn’t going to survive there in a meaningful form, much like religion in general, if current statistical trends follow their trajectory.

If you’re talking about countries where Buddhism is already established, how does AI help with corruption in various Sangha hierarchies, or their involvement in politics, etc.?

How does it help preserve the tipitaka or encourage people to practice?

I can see some time-saving uses, much as printing books is faster than inscribing palm leaves, but I don’t see a radical need for developing a dependency on AI or for changing Buddhism to fit the modern world.

Buddhism is designed to change us, rather than for us to change it.

2 Likes

When I wrote this, I was thinking about the billionaires and other influential people who are driving AI in a way that is harmful to humanity. I was linking that to how, in general, people make bad decisions because they don’t understand the Four Noble Truths.

A very kind and wise person wrote me a message and reminded me that there are lots of people and meditators who are concerned with the direction AI is heading and have chosen to work in these companies precisely to try to steer things in a better direction.

In my post, I made a blanket statement that didn’t take into account the good people with good intentions who see the dangers of AI and use their roles in these companies to make a positive difference.

I realize my words were unjust and I admit my mistake. I made a poor choice of words and didn’t think it through. I am sorry to those I have unfairly characterized or offended.

13 Likes

It’s admirable and inspirational to see people correct themselves in public. Thank you for your fine example. :slight_smile:

6 Likes

It does seem possible that there might be people who either work in the AI industry or are required to use AI at their workplace (it’s even used by my dental hygienist to spot cavities!) who wish to engage with the Buddha’s teachings and the world of Buddhism. Not everyone is able to wall themselves off in a monastery.

2 Likes

This goes back to @cdpatton’s point: using AI to spot things like a cavity would usually fall into the category of “machine learning”, not the current madness surrounding LLMs. Since AI has become the buzzword du jour, it’s been applied to all kinds of things.

1 Like