I just read Alex Wynne’s article on the work he is leading to create a critical edition of the Pali canon. Here’s the article for those interested; it’s well worth a look.
In his article, Wynne describes the great multiplicity of manuscripts consulted and discusses the principles underlying the development of a critical edition. Most of these are sensible enough: prefer explicitly attested forms to inferred ones, prefer forms with multiple attestation, prefer Middle Indic to Sanskritic, and so on. The aim is to construct a text similar to the one used by Buddhaghosa.
I will leave aside the political dimensions of the work; suffice to say, the organization sponsoring this has recently been embroiled in a billion dollar financial scandal. To think they’ll successfully shepherd this project through to its projected completion date of 2027 is optimistic at best, especially given the current Thai political climate and the aging leadership of the organization.
Like any large-scale project, there are problems. There are very few scholars truly competent to make such judgment calls. The standards are inevitably flexible and require interpretation. A different approach might develop a different critical edition, one that was no more or less authoritative.
That’s not to say that there’s anything wrong with their approach; as long as it’s clear, sensible, and consistent it will work fine. It’s just that this method does not produce something that’s somehow objectively better. It’s one particular approach, a perfectly fine and sensible one to be sure, but still just one. And due to the monolithic nature of the project, if you wanted to do another critical edition based on different principles, you’d probably have to start again from scratch.
And it is an incredible amount of work, all the considerations and reconciliations that have to go into each and every phrase. This is inherent in the very idea of a critical edition: there has to be one mainline text, and that has to be justified.
But does there have to be a mainline text? It’s an idea rooted in the technologies and philosophies of the 19th century: does it still have a place? Is all that work really going to accomplish anything? I must admit, I have my doubts.
Consider an alternative approach. Throw out the whole idea of editing a critical edition. Instead, implement a three-step process:
- Type up the manuscripts.
- Put them on GitHub.
- Enjoy a nice cup of coffee, and catch up on some meditation.
GitHub is built on Git, a version-control system whose fundamental purpose is to record differences between things. It is used for, well, just about all software development these days, and it really is pretty amazing. It allows you to instantly visualize or list differences between versions of a text in multiple ways. If you’re not familiar with it, here’s a very simple example. This is a change made to one of our Indonesian texts, just a spelling correction.
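To give a flavour of what such a diff looks like, here is a minimal sketch using Python’s standard `difflib`, which produces the same unified-diff format Git shows. The filenames and the misspelled word are invented stand-ins, not the actual Indonesian text:

```python
import difflib

# Hypothetical "before" and "after" versions of a few lines of a text file;
# the spelling correction here is "kammma" -> "kamma" (an invented example).
before = [
    "evaṁ me sutaṁ\n",
    "sabbe sattā kammma-ssakā\n",
    "bhavatu sabba-maṅgalaṁ\n",
]
after = [
    "evaṁ me sutaṁ\n",
    "sabbe sattā kamma-ssakā\n",
    "bhavatu sabba-maṅgalaṁ\n",
]

# A unified diff: unchanged lines shown for context, the removed line
# prefixed with "-", the corrected line prefixed with "+".
diff = difflib.unified_diff(
    before, after, fromfile="a/sutta.txt", tofile="b/sutta.txt"
)
print("".join(diff))
```

Git adds the timestamp, author, and commit comment on top of exactly this kind of line-level difference.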
As you can see, the text is there, with the difference highlighted, time-stamped, with author and comment: what more do you need? It’s simple, fast, universal, and super reliable.
The texts put into git can be very simple. Plain text, with a number to mark the manuscript boundaries. Maybe some minimal markup, if headings or whatever are found in the manuscripts. The parallel passages in the different editions can be kept in sync just by keeping one sutta per file and matching the line breaks.
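To make the line-break alignment concrete, here is a small sketch with two invented witnesses of the same passage. Because each file uses the same line breaks, line *i* in one file corresponds to line *i* in the other, and variant readings fall out of a simple pairing:

```python
# Two hypothetical witnesses of the same sutta, one sutta per file,
# kept in sync simply by using matching line breaks (invented text).
witness_a = """namo tassa bhagavato
evaṁ me sutaṁ
ekaṁ samayaṁ bhagavā"""

witness_b = """namo tassa bhagavato
evam me sutam
ekaṁ samayaṁ bhagavā"""

# Pair the files line by line and report where the readings diverge.
for n, (a, b) in enumerate(
    zip(witness_a.splitlines(), witness_b.splitlines()), start=1
):
    if a != b:
        print(f"line {n}: {a!r} vs {b!r}")
```

No markup scheme or alignment tool is needed; the convention of one sutta per file with matching line breaks does all the work.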
So basically all you need is good typing. While this is not a trivial task, it doesn’t require anything like the level of scholarly proficiency needed to create a proper critical edition. It’s certainly quite achievable; with our very limited resources, we have sponsored several successful projects for SC. There’s no limit to the number of manuscripts; they can simply be typed and added indefinitely.
One great advantage of this approach is that it’s immediate. Type it up and it’s available. Unlike a monolithic, controlled, top-down project, you don’t have to wait till the end to see if anything’s any good. And because the typed version is always matched with a scanned image of the manuscript, you can always check it, and improve it if necessary.
In this approach, when there’s a dubious reading, you don’t have a pre-digested answer. Good: I don’t want one. What I want is the textual evidence, and some advice and discussion by experts to guide me. But at the end of the day, I’ll make up my own mind, thanks. So again, rather than a unified, concentrated, closed body where decisions are made, it is open and flowing, allowing for comments and discussion. The only thing that’s sacred is the integrity of the text itself, not the scholarly interpretation of it.
Once text is on GitHub, it can be easily forked and applied in all kinds of different ways. These days, making a text application is not that hard. Separating the basic content from the application makes the whole process a lot smoother. You can always just read stuff on GitHub itself if you like, just as you can today with SC’s text. But if someone wants to use it for search, or for multiple variant display, or as an iPhone app, or whatever, they just pull the same data content and apply it.
With a significant number of texts available like this, it becomes possible to analyze the forms statistically and compute relations between texts. Currently this would still be hard, but with the rapid evolution of AI it will become trivial in the near future. So you could do work like, say, the recent discoveries in Shakespearean scholarship, which have clarified Marlowe’s contributions at long last.
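As a toy sketch of what such computation might look like, here is a pairwise similarity comparison over three invented witness readings, using simple character-sequence similarity rather than the actual statistical and stylometric methods such scholarship employs:

```python
import difflib
from itertools import combinations

# Three invented witness readings of the same short passage; the
# manuscript names are hypothetical labels, not real sources.
witnesses = {
    "ms_sinhala": "evaṁ me sutaṁ ekaṁ samayaṁ bhagavā sāvatthiyaṁ viharati",
    "ms_burmese": "evaṁ me sutaṁ ekaṁ samayaṁ bhagavā sāvatthiyaṁ viharati jetavane",
    "ms_thai": "evam me sutam ekam samayam bhagava savatthiyam viharati",
}

# Pairwise similarity ratios (0.0 to 1.0); higher scores suggest a
# closer relation between the manuscript traditions.
for (name1, text1), (name2, text2) in combinations(witnesses.items(), 2):
    ratio = difflib.SequenceMatcher(None, text1, text2).ratio()
    print(f"{name1} ~ {name2}: {ratio:.2f}")
```

With hundreds of witnesses, clustering such scores would begin to sketch a family tree of the manuscript traditions, which is the kind of analysis that becomes possible only once the raw texts are openly available.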
None of this is possible until we have multiple open, reliable textual witnesses. We have the technology: do we have the people to do the work?