Not a newcomer, but I wanted to ask: is it prohibited on this forum to say anything positive about AI?
Not really.
SC-Voice on the main website uses AI, there’s at least one AI-powered tool posted and discussed here (@abuddhistview’s Vectoral Sutta Finder), and there’s a recent Vectoral Pattern Matching analysis of suttas on the front page right now.
It’s fair to say that, with Bhante Sujato’s AI articles, a certain critical opinion leads the forum and the community. However, debate is certainly allowed.
Your celebratory post is not censored, for example.
Please note that SC-Voice is a separate application. It is based on SuttaCentral’s data for root and translation texts, and there are links from the main SuttaCentral site to SC-Voice, but they are two different things.
In the past, SC-Voice was hosted on SuttaCentral’s domain (voice.suttacentral.net), but we changed that to honor Bhante Sujato’s wish to keep SuttaCentral free from AI, and we now have our own domain (sc-voice.net).
This is a forum dedicated to Early Buddhism and Early Buddhist texts. The recent explosive usage of AI has rapidly impacted humanity in a multitude of ways. Since AI collided with Buddhist texts and Buddhist practice, we’ve had many diverse discussions about the implications that arise and about how AI can, should not, or cannot be used on this forum. If a post that has something to do with AI is flagged by the community, the moderation staff will review it and determine the appropriate action.
There is an entire internet full of forums that welcome general discussions about the uses of AI, but on this forum we maintain the focus on Early Buddhism and Early Buddhist texts.
If you’d like to dig deep and discover the implications of AI for Buddhism and for the welfare of humanity, please read the Stochastic Parrots series by Bhante Sujato.
One of the issues these days is that the term “AI” is now used for just about any kind of data processing tool, which is not the way it was used before all of the commercial hype about LLMs (chatbots). SC-Voice is not AI; it’s a text reader, and those types of tools have been around for a long time. Optical character recognition isn’t AI either; it has also been around for a long time. Machine learning tools used to categorize galaxies for astronomers aren’t really AI either, though they’re a related technology, because I believe they use neural nets to achieve “fuzzy” pattern matching. There is nothing wrong with using software tools like these to save some time when dealing with massive amounts of data, or to automate something like having someone read a sutra aloud.
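For what it’s worth, the “fuzzy” pattern matching in such classifiers boils down to comparing measured features against labeled examples. A minimal sketch in Python, with features, labels, and numbers entirely invented for illustration:

```python
# Toy "fuzzy" pattern matching of the non-generative kind described above,
# e.g. sorting galaxy images into classes. All data here is made up.
import math

# Pretend each object has been reduced to two measured features.
training = {
    "spiral":     [(0.9, 0.2), (0.8, 0.3)],
    "elliptical": [(0.1, 0.9), (0.2, 0.8)],
}

def classify(features):
    """Return the label whose training examples are closest on average."""
    def mean_dist(examples):
        return sum(math.dist(features, ex) for ex in examples) / len(examples)
    return min(training, key=lambda label: mean_dist(training[label]))

print(classify((0.85, 0.25)))  # -> "spiral"
```

A real system replaces the hand-picked features and distance rule with a trained neural net, but the point stands: this kind of tool sorts inputs into known categories; it doesn’t generate anything.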
LLMs have become notorious mainly because they don’t know how to say, “I don’t know.” Instead they write up little essays about topics that are full of made-up content that looks vaguely like real content. And that’s what LLMs are designed to do under the hood. They are like autocomplete spellcheckers that generate whole essays instead of single words or phrases. That’s why they routinely go off the rails when they are asked about something they have little training data on. They just make something up, unless guardrails are put on them to stop them from responding. They aren’t intelligent, but they fool people when the content they produce is good.
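To make the autocomplete analogy concrete, here is a toy sketch of next-token generation in Python. The word table and its probabilities are pure invention; a real LLM does this over tens of thousands of tokens with learned weights:

```python
# Toy next-token generation: pick each new word from a probability table
# conditioned on the previous word, just like autocomplete at essay scale.
import random

# Hypothetical continuations with made-up probabilities.
model = {
    "the":  [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat":  [("sat", 0.6), ("ran", 0.4)],
    "dog":  [("barked", 1.0)],
    "moon": [("rose", 1.0)],
    "sat":  [("quietly", 1.0)],
}

def generate(word, steps=3):
    """Sample one continuation at a time; nothing checks truth, only fluency."""
    out = [word]
    for _ in range(steps):
        options = model.get(out[-1])
        if not options:
            break  # a real model never runs out; it always samples something
        words, weights = zip(*options)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat quietly"
```

Note that the loop always emits a fluent-looking continuation; nothing in the mechanism verifies whether the output is true, which is exactly the point being made here.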
We really should be more specific in our minds about these different kinds of software. An LLM is what we are usually calling AI. The other tools are not really the issue.
Hear, hear!
I do have one positive thing to say about AI (LLMs, specifically): they have the potential to bridge the gaps between domain-specific vocabularies that often serve as barriers for people learning about or engaging with specific subjects. Good examples are domains like finance and even software engineering, which are full of confusing terms or non-standard uses of words that “regular” people are often unfamiliar with; I suspect that may sometimes be by design.
I think once the current bubble pops we’ll be able to take a more objective look at LLM usage, especially at models that are significantly more efficient and capable of running entirely on a local device rather than the resource-guzzling behemoths big tech is currently shoveling money into, and see them for what they actually are.
But why bother to learn to code, or to trade, if the model can do it for you, and better than you?
New evidence strongly suggests AI is killing jobs for young programmers
It’s a brutal time to be a recent computer science graduate.
Right, my point is that I think a positive use of micro-sized LLMs would be to help bridge these domain gaps, vs. having the models attempt to do the job or task for us.
One of the issues these days is that the term “AI” is now used for just about any kind of data processing tool, which is not the way it was used before
I suspect this is because the earlier applications didn’t closely resemble human behaviour and outputs. They were just “smart” and “very smart” (neural networks) computer tools. With the LLMs, it is very easy to mistake their messages, texts, and other outputs for those of humans.
But it says:
Artificial Intelligence (AI)
This website has AI content. This website does not use Generative Artificial Intelligence. All AI content on this website was created using only 1-to-1 AI transformations that preserve semantic content without misrepresentation, embellishment or omission.
SC-Voice
Instead they write up little essays about topics that are full of made-up content that looks vaguely like real content. And that’s what LLMs are designed to do under the hood.
We are at the very beginning of the AI epoch, like the first minutes of a newborn’s life, but they have already learned how to react correctly when they don’t “know” something. I’ve been using the latest GPT for about a month and have not noticed a single hallucination (in “extended thinking” mode).
Other users also say that:
You can train hallucinations out of the model. You train them at long context agentic workflows and ensure the agent just keeps working if it doesn’t know the answer to figure the answer out. This is what OpenAI is doing with GPT5.2 and it is about 100x more reliable than Gemini at long context agentic workflows as a result
https://www.reddit.com/r/OpenAI/comments/1psge99/comment/nvasiow/
Besides, in order to make up a “topic full of made-up content that looks vaguely real”, one needs to be at least somewhat intelligent. A 14-year-old (even a very intelligent one) can’t make up a lot of content on some lofty issues of quantum electrodynamics or critical theory that would look correct to an adult expert in quantum physics or critical theory. An 8-year-old or a 3-year-old, even less so. Dogs can’t do that, chimps can’t, elephants and dolphins can’t. Computer systems from 30 years ago, and even from 7 years ago, couldn’t do it. But these LLMs really can have a meaningful conversation with an adult human expert. So they do have something “intelligent” in them…
They aren’t intelligent, but they fool people when the content they produce is good.
If they produce content that is (sometimes) good even for human experts, are they really not intelligent at all?
I don’t know exactly what happens “inside” the LLM’s “brain”: is there some kind of awareness or self-reflexivity or consciousness…? I agree, it may all just be some very complex, high-order mimicking, without any understanding or intelligence. But if they get 2 or 5 or 10 times better at this mimicking, to the extent that they can efficiently do everything a human can do, and more, what would be the point of having this “understanding”, this “awareness” that we think we have (many researchers think that our sense of our sentience/consciousness/intelligence is a tricky illusion), if the machines mimic it so perfectly that you can’t tell the difference? Or you can tell the difference: the machine is “smarter” than you are? These are, of course, philosophical questions with no immediate answers, but it looks like we will have every reason to ask them of ourselves in 5-10 years…
They are like autocomplete spellcheckers that generate whole essays instead of single words or phrases.
A formidable task! It requires multi-level emulation of the cognitive, emotive, and other systems of an adult human being, and the building of complex models of the world, the subject, language, culture, etc., and of the various complex relations between them…
Whether this is true or not, it seems pretty far removed from the early Buddhist texts. What is the relevance of this conversation (or this tweet) to the topic of this forum?
If Buddhism wants to survive and succeed in the modern world, it needs to, at the very least, understand this world.
I don’t think the aim of Buddhism is survival in the modern world, but rather understanding dukkha in this world and beyond. MN 43 describes this understanding:
“Reverend, they speak of ‘a witless person’. How is a witless person defined?”

“Reverend, they’re called witless because they don’t understand. And what don’t they understand? They don’t understand: ‘This is suffering’ … ‘This is the origin of suffering’ … ‘This is the cessation of suffering’ … ‘This is the practice that leads to the cessation of suffering.’ They’re called witless because they don’t understand.”

Saying “Good, reverend,” Mahākoṭṭhita approved and agreed with what Sāriputta said. Then he asked another question:

“They speak of ‘a wise person’. How is a wise person defined?”

“They’re called wise because they understand. And what do they understand? They understand: ‘This is suffering’ … ‘This is the origin of suffering’ … ‘This is the cessation of suffering’ … ‘This is the practice that leads to the cessation of suffering.’ They’re called wise because they understand.”
The foundation of the Buddhist path is to live and meditate in such a way as to remove the defilements of the mind so that one can see clearly. I doubt there are many in the AI industry who have developed their minds enough in this way to understand some of the dangers of AI to humanity.
I seem to remember the opposite: when I took AI at uni, it covered things like greedy algorithms, graph traversal, some of the statistics in machine learning, and symbolic computation like reasoning with Prolog, plus the culture associated with its origins at the MIT AI lab (like Common Lisp systems, which often have symbolic systems, programs that make programs, and user-modifiable software at run time), which is also where the term “hacker” and copyleft FOSS come from (though the background of their engineering funding was also the machinery of the military-industrial complex).
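As a refresher on that older sense of “AI”, here is a minimal greedy best-first graph search in Python; the graph and heuristic values are invented for illustration:

```python
# Classic-coursework "AI": search a graph, always expanding whichever
# frontier node a hand-written heuristic says is closest to the goal.
import heapq

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}  # guessed distance to goal

def greedy_best_first(start, goal):
    """Return a path from start to goal, or None if unreachable."""
    frontier = [(heuristic[start], start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("A", "D"))  # -> ['A', 'C', 'D']
```

No learning and no text generation involved: just search guided by a hand-written heuristic, which is what a lot of classic AI coursework looked like.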
Now when people say “AI” I assume they mean LLMs, and generative LLMs in this incarnation of 3-4 mega-corps; they generally don’t mean that older sense, or the term is used in corporate-speak ways.
What do you mean by “modern world”? If your reference is to “the West”, then you’re probably right that Buddhism isn’t going to survive there in a meaningful form, much like religion in general, if current statistical trends follow their trajectory.
If you’re talking about countries where Buddhism is already established, how does AI help with the corruption in various Sangha hierarchies, or with their involvement in politics, etc.?
How does it help preserve the tipitaka or encourage people to practice?
I can see some time-saving uses, much as printing books is faster than inscribing palm leaves, but I don’t see a radical need for developing a dependency on AI, or for changing Buddhism to fit the modern world.
Buddhism is designed to change us, rather than for us to change it.
When I wrote this, I was thinking about the billionaires and other influential people who are driving AI in a way that is harmful to humanity. I was linking that to how, in general, people make bad decisions because they don’t understand the Four Noble Truths.
A very kind and wise person wrote me a message and reminded me that there are lots of people and meditators who are concerned with the direction AI is heading and have chosen to work in these companies precisely to try to steer things in a better direction.
In my post, I made a blanket statement that didn’t take into account the good people with good intentions who see the dangers of AI and use their roles in these companies to make a positive difference.
I realize my words were unjust and I admit my mistake. I made a poor choice of words and didnât think it through. I am sorry to those I have unfairly characterized or offended.
It’s admirable and inspirational to see people correct themselves in public. Thank you for your fine example.
It does seem possible that there are people who either work in the AI industry or are required to use AI at their workplace (it’s even used by my dental hygienist to spot cavities!) who wish to engage with the Buddha’s teachings and the world of Buddhism. Not everyone is able to wall themselves off in a monastery.
This goes back to @cdpatton’s point: using AI to spot things like cavities would usually fall into the category of “machine learning”, not the current madness surrounding LLMs. Since AI has become the buzzword du jour, it’s been applied to all kinds of things.
