As the debate over AI and Buddhism continues, I’d like to offer this little gem. Here you go … (from my conversation with ChatGPT). Note: the blocks of text written by the AI are included here to emphasize how useless AI is.
In which sutta did the Buddha say that not even one ascetic in one hundred was reborn in heaven?
The sutta you’re referring to is the “Dīghanakha Sutta” (MN 74) in the Majjhima Nikāya. In this discourse, the Buddha engages in a conversation with the ascetic Dīghanakha. During their discussion, the Buddha mentions that not even one in a hundred ascetics, who hold certain wrong views, is reborn in a heavenly realm. This highlights the importance of right view and correct understanding in the path to liberation.
Would you like more information or a summary of the Dīghanakha Sutta?
What verse says the above?
The specific passage where the Buddha makes a statement about the rarity of ascetics being reborn in heaven is not explicitly found in the “Dīghanakha Sutta” (MN 74). However, the sentiment expressed is similar to the Buddha’s teachings on the importance of right view and the difficulties of attaining favorable rebirth without proper understanding and practice.
For precise references, it might be beneficial to look into suttas discussing the outcomes of wrong views and actions, such as in the “Cūḷa-kammavibhanga Sutta” (MN 135), where the Buddha details how different actions …
Umm … does anyone know of the sutta which mentions the rarity of ascetics from other religions entering into heaven - explicitly and actually - what the heck ChatGPT???
MN 71
“Mister Gotama, are there any Ājīvaka ascetics who make an end of suffering when the body breaks up?” “No, Vaccha.” “But are there any Ājīvaka ascetics who go to heaven when the body breaks up?” “Vaccha, when I recollect the past ninety-one eons, I can’t find any Ājīvaka ascetics who have gone to heaven, except one; and he taught the efficacy of deeds and action.”
Edit to add: ChatGPT is not a search engine or a source of information or knowledge. It is a large language model. Even the most pro-AI folks don’t claim that it’s good at answering Dhamma questions like this, and certainly not at giving citations.
Do the designers not train the models to admit when they don’t have an answer? Can’t program humility, I suppose.
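You can at least ask for it in the prompt, though nothing forces the model to comply. A minimal sketch, assuming the OpenAI Python SDK; the model name is just illustrative:

```python
# Minimal sketch: instruct the model to admit uncertainty up front.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; "gpt-4o" is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The "humility" instruction; the model may still ignore it.
        {"role": "system",
         "content": "If you are not certain of a citation, say 'I don't "
                    "know' instead of guessing. Never invent sutta "
                    "references."},
        {"role": "user",
         "content": "In which sutta did the Buddha say that not even one "
                    "ascetic in one hundred was reborn in heaven?"},
    ],
)
print(response.choices[0].message.content)
```

In practice an instruction like this reduces, but does not eliminate, confident fabrication.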
Edit to add: I find it useful for working through calculations on physics questions. That appears to be knowledge of sorts.
Thank you for the reference, btw. I’ve been thinking about that one off and on for over a year.
ChatGPT is not a being; wrong speech simply does not apply to a tool. ChatGPT just produced the wrong answer to your request: the wrong output for your input. Nothing more, nothing less.
It literally told me the explicit statement was to be found in MN 74 and then, when questioned further, said the opposite.
I’m not implying machines have morals. I’m saying it’s a hilarious feature of this LLM that it has no problem producing “fake news”.
What exactly is it learning, such that it can blatantly supply wrong information and lead people to believe something that is completely false?
Again, I’m not imputing morals to machines, just pointing to a considerable flaw in the design. Especially when we rely on computers to be accurate, it’s funny how easily this one supplies blatantly wrong information.
As someone already mentioned, this LLM is not using the same database as the programs that supply translations. However, if ChatGPT is any indication of the accuracy of information supplied by “AI”, then this is a huge failure.
This is kind of old news. Haven’t you been following the numerous discussions here?
ChatGPT is trying to sound human. It’s not trying to be accurate. The problem is that people expect something that sounds so believably human to also be accurate.
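One way to see why: under the hood, the model just samples whichever continuation is statistically likely given its training data; there is no step where it opens the Majjhima Nikāya to check. A toy sketch with made-up numbers (not ChatGPT’s real probabilities):

```python
# Toy illustration of next-token sampling; the probabilities are invented.
import random

# Hypothetical model scores for completing
# "The sutta you're referring to is ...":
candidates = {
    "MN 74 (Dīghanakha)": 0.45,           # sounds plausible; wrong here
    "MN 135 (Cūḷa-kammavibhanga)": 0.30,  # also sounds plausible
    "MN 71": 0.25,                        # the one with the actual quote
}

answers = list(candidates)
weights = list(candidates.values())

# The sampler only cares about plausibility, never truth.
print(random.choices(answers, weights=weights)[0])
```

Whichever answer sounds most like the training data tends to win, whether or not it’s true.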
AI is a tool, and like any tool, especially one as advanced and complicated as AI, it is only useful and effective once you first learn how to use it. Have you ever heard of Prompt Engineering? It’s a new professional skill in high demand.
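For example, one common prompt-engineering move is to stop asking the model to recall citations from memory and instead paste the source text into the prompt and constrain the answer to it. Another minimal sketch, again assuming the OpenAI Python SDK (the model name is illustrative):

```python
# Minimal "grounding" sketch: supply the passage and restrict the answer
# to it. Assumes the OpenAI Python SDK and OPENAI_API_KEY; "gpt-4o" is
# illustrative.
from openai import OpenAI

client = OpenAI()

# The MN 71 quote from earlier in this thread.
passage = (
    '"Vaccha, when I recollect the past ninety-one eons, I can\'t find '
    'any Ājīvaka ascetics who have gone to heaven, except one; and he '
    'taught the efficacy of deeds and action." (MN 71)'
)

prompt = (
    "Answer ONLY from the passage below. If the passage does not answer "
    "the question, reply: not stated in the passage.\n\n"
    f"Passage: {passage}\n\n"
    "Question: Which sutta mentions the rarity of ascetics from other "
    "sects going to heaven?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Grounding the model in a supplied text sidesteps the citation problem, because the citation comes from you rather than from the model’s memory.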
It seems obvious to me that any AI has to be implemented with certain guardrails to prevent it from producing offensive or politically incorrect statements. These always reflect a human bias; in the case of the West, a joyful blend of Judeo-Christian values, neo-liberalism, and positive sciences.
Since these are (possibly) not reconcilable with right view, I think that right view should not be expected from any such application.