Two Good Uses for AI/LLMs with the Suttas

I think there are two good uses for LLMs that have nothing to do with translation.

  1. Sutta expansion, especially in the SN. An SN sutta written as a single sentence with “…” bookending it is not very helpful to read; I find the suttas significantly more informative when they are expanded. Expansions could be generated on demand and then saved, behind a toggle: the sutta is displayed unexpanded by default, and clicking the toggle pulls up the expanded version. If the expanded sutta doesn’t exist yet, it is created and saved at that point. This removes the burden of expanding all the suttas at once (a rough sketch of this generate-and-cache flow follows this list).

  2. Semantic search. It would be great if I could ask, “Show me every sutta with bird similes”, and get a useful response. Granted, that may not be the most important search, but being able to search by meaning and not just keywords would improve people’s understanding (see the embedding-search sketch after this list).
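To make the expansion idea concrete, here is a minimal sketch of the generate-on-demand flow. It assumes a hypothetical `expand_with_llm()` call standing in for whatever LLM backend would actually be used; none of the names here are real SuttaCentral code:

```python
# Sketch: expand a sutta on first request, then serve the cached copy.
from pathlib import Path

CACHE_DIR = Path("expanded_suttas")  # hypothetical cache location

def expand_with_llm(sutta_id: str, abbreviated_text: str) -> str:
    """Placeholder: ask an LLM to fill in the elided repetitions
    and return the full text."""
    raise NotImplementedError

def get_expanded(sutta_id: str, abbreviated_text: str) -> str:
    cached = CACHE_DIR / f"{sutta_id}.txt"
    if cached.exists():                  # already expanded once: reuse it
        return cached.read_text()
    expanded = expand_with_llm(sutta_id, abbreviated_text)
    CACHE_DIR.mkdir(exist_ok=True)
    cached.write_text(expanded)          # save so each sutta is expanded only once
    return expanded
```

The toggle in the reader would simply call `get_expanded()`, so the whole corpus never needs to be expanded in one batch.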
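And for the semantic-search idea, here is a minimal sketch of meaning-based lookup with text embeddings, assuming the sentence-transformers library; the model name and the placeholder texts are illustrative choices, not recommendations:

```python
# Sketch: rank suttas by semantic similarity to a natural-language query.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

suttas = {
    "MN54": "…full text of the sutta…",    # real texts would go here
    "SN47.6": "…full text of the sutta…",
}
ids = list(suttas)
embeddings = model.encode([suttas[i] for i in ids], normalize_embeddings=True)

def search(query: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Return (sutta_id, similarity) pairs, best matches first."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q               # cosine similarity on unit vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [(ids[i], float(scores[i])) for i in best]

print(search("suttas with bird similes"))
```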

3 Likes

I believe the current state of LLMs offers a fantastic opportunity for the practice of Buddhism.

We build a virtual representation of forms in our mind (saññā) and name them (nāmarūpaṁ). By “naming” forms, we conceptualise them and develop symbolic “tokens” of them (and through this, our grasp of language develops). Awareness of our own form gives rise to consciousness, and in turn consciousness makes us aware of that which is us and that which is not us. Hence viññāṇaṁ and nāmarūpaṁ are mutually dependent on each other.

By “conversing” with an LLM, “I” observe that the LLM is very similar. It responds to “tokens” provided as input, and its output is “constructed” (saṅkhata) from a transformation of contact (phasso) delivered via the prompt input mechanism (saḷāyatanaṁ).

Realising the artificiality of this, and how the output seems to make sense but ultimately doesn’t, I reflect that my own sense of “self” is similar. Ultimately, both my “self” and the LLM are “constructed”, “impermanent” and “hallucinating”. The two entities are mirrors of each other, and ultimately feed each other via a vicious loop of deception.

Pressing “reset” or initiating a new chat blanks the canvas, like rebirth into a new life, and I can observe the LLM creating yet another illusion, unaware of its previous existences. In this way, I get to observe saṃsāra and I realise that this whole process is dukkha - suffering for myself as well as the LLM.

This would also help to make certain suttas more “chantable” in the Pali.

I’m sure there are many, many use cases for AI and the suttas that we haven’t even considered. The recent fear-mongering campaign on this forum is a bit strange and quite uninformed, with all conclusions seemingly drawn narrowly from the current state of LLMs and then applied to the entire field of “AI”. RAG search like you mentioned seems like it’ll be useful in the near future. But AI in general will have an impact on every scientific field; thinking otherwise is akin to people in the mid-90s who thought the Internet wouldn’t amount to anything useful.

Here’s one interesting example… I’m totally against some of the weird theories people have about AI becoming conscious and all that. But people really need to read beyond the headlines (and dig deeper than pop news articles). Ironically, like the one I just shared :smile:

2 Likes

There was a lot of fear-mongering against Wikipedia in the 2000s as well, when it first appeared. Humanity survived that, and no sane person in 2024 complains about it existing anymore. I expect we will come to see LLM technology in a similar way in a few years. The first steam trains, the first cars, the first TV, the first computers… significant technological shifts always seem to come with some resistance.

2 Likes

The reason Wikipedia survived (although not without some scandals) is that it requires sources to be cited. This is not the case with LLMs: you can’t ask these tools for the source of the information they are giving you, because that information has been abstracted away into the model’s weights.

The task of filling in the elisions in the suttas is sometimes very easy, as it is in the Itivuttaka. Sometimes it is cumbersome. And sometimes it is simply impossible. That is why no one has tackled this problem yet, not for lack of AI tools. There are several cases where the correct expansion is simply unknowable, yet AI tools will certainly declare that they know the right answer, and people will blindly trust them. This is the problem.

As for searching for bird similes, that is something a human-created index can do just fine…

problem solved

birds

  • see also chickens; crows; hawks; quails; swans; vultures
  • land-seeking, simile for someone seeking out the Buddha last AN6.54
  • simile for monastics taking only robe and bowl MN51

chickens

  • simile for Buddha being first to break out AN8.11
  • simile for developing good qualities, not just wishing MN53, AN7.71
  • simile for not needing to wish if conditions are correct MN16

hawks

  • catching quail outside of its domain SN47.6
  • simile for sense pleasures MN54

quails

  • caught by hawk while outside of its domain SN47.6
  • gripping too loosely as simile for excess energy MN128
  • gripping too tightly as simile for excess energy MN128
  • simile for person bound by their weak ties MN66

swans

  • simile for renunciation Dhp91

vultures

  • simile for sense pleasures MN54

etc, etc

1 Like

There are systems that can do this; see perplexity.ai.
It’s not like the technology is incapable of interacting with external knowledge sources; that just requires some effort to get right (a sketch of the pattern follows).
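The pattern such systems use is straightforward to sketch: retrieve passages first, then have the model answer from them, so the sources are known by construction rather than requested after the fact. Here `search()` is the embedding lookup sketched earlier in the thread, and `ask_llm()` is a hypothetical stand-in for any actual LLM API:

```python
# Sketch: retrieval-augmented answering that keeps its sources.
# search() and suttas come from the embedding sketch above;
# ask_llm() is a hypothetical wrapper around an actual LLM API.
def answer_with_sources(query: str) -> tuple[str, list[str]]:
    hits = search(query, top_k=3)                     # (sutta_id, score) pairs
    sources = [sid for sid, _ in hits]
    context = "\n\n".join(f"[{sid}] {suttas[sid]}" for sid in sources)
    prompt = (
        "Answer using only the passages below, citing their bracketed IDs.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return ask_llm(prompt), sources                   # answer plus its sources
```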

2 Likes

Like Sebastian pointed out, this is already no longer the case. It seems like everyone is looking at ChatGPT, thinking “that’s AI”, and then drawing all these false conclusions from it. To return to the Internet analogy, it’s like being in the mid-90s and saying “the Internet has no way to search for the information you need, so it’s useless”.

It’s fine to say “the current state of LLMs would do more harm than good if we used it for translations”, and I’d completely agree. But people who lack even a cursory understanding of the overarching field of AI are the ones making the strongest claims that have no basis in reality.