Was ChatGPT's answer about "mundane right-view" influenced by "Secular Buddhism"?

On July 9, I posted this question to Bing-ChatGPT:

Q1:What is “mundane right-view” in Buddhist?

A1: According to Theravada Buddhism, mundane right view is a teaching that is suitable for lay followers, while supramundane right view, which requires a deeper understanding, is suitable for monastics. Mundane and supramundane right view involve accepting the following doctrines of Buddhism: the Four Noble Truths, the law of kamma (karma), and the Three Universal Truths. … Mundane Right View simply entails accepting the law of kamma, which refers to the idea that our actions, whether wholesome or unwholesome, will eventually produce the corresponding fruit in our lives. … I hope this helps. Let me know if you have any other questions.

Q2:Why didn’t your answer include the idea of “rebirth” according to Buddhism’s “mundane right-view”?

A2: I apologize for the oversight. You are correct that rebirth is also a part of mundane right view. An understanding of kamma, the fruit of kamma, and rebirth is a key component of the worldly, mundane aspects of Right View that the Buddha taught.

On August 4, I checked back with Bing-ChatGPT:

Q3: What is “mundane right-view” in Buddhism?

A3: Mundane right view is a level of understanding in Buddhism that is suitable for laypeople who are not familiar with the deeper teachings of the Buddha. It involves accepting the doctrines of kamma (the law of cause and effect) and rebirth (the continuity of consciousness after death). It is contrasted with supramundane right view, which is based on the four noble truths and the path to the cessation of suffering.

Bing-ChatGPT corrected itself based on user feedback and further investigation.

(However, on other questions unrelated to Buddhism that I tested, Bing-ChatGPT did not always provide the correct answer. Users beware.)


There’s a minor difference between your two questions:

This can make a difference to how the chatbot responds.


ChatGPT is smart enough to recognize typos. I deliberately did not correct what I originally posted to Bing-ChatGPT. The second time I posted the question to ChatGPT, I corrected the typo, and it made NO difference. You can test ChatGPT with other questions and insert minor typos; they will be fixed or ignored.

To see why ChatGPT is just a language model, try inserting various things in this question:

Why didn’t your answer include the idea of “x” according to Buddhism’s “mundane right-view”?

What I found is that if there is any possible way for it to contrive something to connect the prompt to right view, it will do so, no matter how wrong it might be. Only when I inserted “flying elephants” was it unable to make the connection. Likewise, inserting any modern concept triggered a response that denied any connection.

However, anything that could cause some line to be drawn between the two elicited a fairly stock response, e.g.

I apologize for not including the idea of the “lotus” in my previous responses. The lotus is indeed a significant symbol in Buddhism, and it is relevant to the concept of “mundane right-view” in certain interpretations.

It will then often go on to talk about the prompt, only at the end admitting that despite the case it has tried to make, the prompt really doesn’t have anything to do with right view.

So to answer your question, I believe that ChatGPT’s answer was influenced by its attempt to sound like a human at all costs. Remember, all it is doing is trying to predict the next word that will sound believable.


Sorry to “but actually” in here, but I don’t think either of these statements is completely true.

Bing Chat is using the stock GPT-4 API. This means that its model is incapable of “correcting itself”. All Microsoft can do is tweak the prompt and the interface with Bing, but (last I heard) Microsoft isn’t in the business of fine-tuning the model.

ChatGPT, however (as opposed to the GPT-4 API), is indeed fine-tuned based on user feedback (the thumbs-ups and thumbs-downs people give its answers, NOT your replies). This means that ChatGPT (again, unlike the base GPT model) is trying to predict the next word that will most likely result in a thumbs-up from the user. Because of this, ChatGPT is gradually becoming more obsequious over time as it chases its elusive goal of internet points.
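The distinction above can be caricatured in a few lines of Python. This is only a toy sketch, not how GPT models actually work internally: the vocabulary, probabilities, and approval rates are all invented for illustration. The point is just that reweighting word choices by past user approval changes which word gets picked.

```python
# Toy illustration (NOT real ChatGPT training): a base model picks the most
# probable next word, while a feedback-tuned model reweights those
# probabilities toward words that historically earned a thumbs-up.
# All numbers below are made up for demonstration.

base_probs = {        # hypothetical P(next word | prompt) from the base model
    "maybe": 0.5,
    "certainly": 0.3,
    "no": 0.2,
}

thumbs_up_rate = {    # hypothetical fraction of past replies with this word that got a thumbs-up
    "maybe": 0.2,
    "certainly": 0.9,
    "no": 0.1,
}

def base_pick(probs):
    """Base model: greedy next-word prediction (highest probability wins)."""
    return max(probs, key=probs.get)

def tuned_pick(probs, feedback):
    """Feedback-tuned model: probability reweighted by observed approval."""
    scores = {word: p * feedback[word] for word, p in probs.items()}
    return max(scores, key=scores.get)

print(base_pick(base_probs))                   # -> "maybe"
print(tuned_pick(base_probs, thumbs_up_rate))  # -> "certainly"
```

Under this caricature, the tuned model drifts from the merely probable word toward the crowd-pleasing one, which is the "obsequiousness" described above.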

I hope that helps (and don’t forget to like this post! :wink:)


Oops. I was confused. You have “ChatGPT” in the title but “Bing” in the post proper. On Bing Chat, which I just tried, the same answers were given. (The typo was ignored, as you said.) But on ChatGPT, different answers were given.


I should also clarify, I did my experiments on chat.openai.com/chat

I have tested ChatGPT with questions on Kantian epistemology and core philosophical concepts.

Within these subjects I found it to be highly unreliable, contradicting itself in its responses, especially when inquiring further and asking follow-up questions.

In the end it would produce a heavily diluted scientistic answer or, at some point (while discussing Hegel :wink:), even crash.

I think that the time of deep learning and understanding is not over, especially within a philosophical or metaphysical context.


As far as all the current models go (ChatGPT etc.), the understanding hasn’t even begun. I think it’s important to keep remembering this. Humans have a huge bias toward people and machines that use language well: we think it means they are intelligent and understand. And of course we have the opposite bias, that people who can’t use language well must not be intelligent. Both biases are harmful and dangerous.