In fact, having poked a little further into this, I’d bump this up to “json-ld is really worth considering” if we want to live up to our moral duties as respectable citizens of the internet (and, of course, reap—or more to the point, offer—the benefit of significantly improved searchability).
Why’s that? 'Cos Tim Berners-Lee said! The specific words he used were:
if you’re responsible – if you know about some data in a government department, often you find that these people, they’re very tempted to keep it – Hans [Rosling] calls it database hugging. You hug your database, you don’t want to let it go until you’ve made a beautiful website for it. Well, I’d like to suggest that rather – yes, make a beautiful website, who am I to say don’t make a beautiful website? Make a beautiful website, but first give us the unadulterated data, we want the data. We want unadulterated data. OK, we have to ask for raw data now.
(The Next Web, Tim Berners-Lee, 2009)
Now, of course, it would be entirely off the mark to call SC a database-hugger—quite the opposite—nevertheless, it does need to take a step or two further in order to contribute to the construction of the Semantic Web in the way that it should. Using json-ld is a pretty straightforward means of doing so, and we’ll fare better in search engines as a result.
JSON-LD, you say?
“The JSON-LD data model allows for a richer set of resources, based on the RDF data model … JSON-LD is a concrete RDF syntax as described in RDF11-CONCEPTS. Hence, a JSON-LD document is both an RDF document and a JSON document and correspondingly represents an instance of an RDF data model. However, JSON-LD also extends the RDF data model to optionally allow JSON-LD to serialize generalized RDF Datasets.” (JSON-LD 1.1: A JSON-based Serialization for Linked Data)
Further, certainly with respect to the kind of metadata file outlined in the ticket in the OP, implementation looks quite easy. It’s perfectly possible to build a SuttaCentral vocabulary and keep the exact value names given in the sample json. That said, on the face of it, it would seem a lot more sensible to just use the already existing, extensive and widely used schema.org vocabulary for most things, and reserve self-generated terms for properties that don’t already exist elsewhere.
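To make that concrete, here’s a minimal sketch of what such a metadata block could look like: standard terms from schema.org (`name`, `author`, `inLanguage`, `translationOfWork` are all real schema.org properties), plus a hypothetical `sc:` vocabulary for anything SC-specific. All the property values and the `sc` vocabulary URL are illustrative assumptions, not SC’s actual schema.

```python
import json

# Illustrative JSON-LD metadata for a translation. Standard terms come from
# schema.org; "sc" is a HYPOTHETICAL SuttaCentral vocabulary (the URL is an
# assumption) for properties schema.org doesn't cover.
metadata = {
    "@context": {
        "@vocab": "https://schema.org/",
        "sc": "https://suttacentral.net/vocab#",  # hypothetical custom vocabulary
    },
    "@type": "CreativeWork",
    "name": "Mindfulness of Breathing",               # placeholder title
    "author": {"@type": "Person", "name": "Example Translator"},
    "inLanguage": "en",
    "translationOfWork": {                            # schema.org bib extension term
        "@type": "CreativeWork",
        "name": "Anapanassati Sutta",
    },
    "sc:uid": "mn118",  # custom term for an SC-specific identifier
}

print(json.dumps(metadata, indent=2))
```

The point of the `@context` is that a consumer can expand `name` to `https://schema.org/name` and `sc:uid` to the full custom IRI, so the same document reads as plain JSON to us and as RDF to a linked-data client.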
Now, although it stretches the scope of the thread a little (as it goes beyond specifically SC translations), it’s directly related, so it’s also worth noting that there’s no good reason why this shouldn’t be applied to legacy texts too; at the very least in some stripped-down version that extracts the data already given in these files via a few additional classes.
My reading around the subject suggests that implementation here should also be reasonably straightforward; that it would tie in very naturally with the legacy text page upgrade; and, further, that it would be a better avenue to go down than what’s proposed in Update SC's social sharing information · Issue #1486 · suttacentral/suttacentral · GitHub. It may not resolve the issue at the heart of that ticket, as popular social networking sites have their own markup they like folks to use too, so we’d have to see the results—but in any case it would be the first step.
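For legacy pages, the standard delivery mechanism is a `<script type="application/ld+json">` element embedded in the page, which is the form search engines consume. A minimal sketch of generating one (the metadata values are placeholders; in practice they’d be extracted from the legacy file’s existing markup):

```python
import json

# Placeholder metadata for a legacy text; real values would be scraped from
# the classes/markup already present in the legacy HTML file.
metadata = {
    "@context": "https://schema.org/",
    "@type": "CreativeWork",
    "name": "Example legacy translation",  # placeholder
    "inLanguage": "de",                    # placeholder
}

# Wrap the serialized JSON-LD in the script element search engines look for.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(metadata, indent=2)
    + "\n</script>"
)

print(script_tag)
```

Dropping a block like this into the template for legacy pages would get the basic linked-data benefit without touching the texts themselves.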