Absolutely fantastic! Problem solved!
This means that Voice could simply use the JSON timing map generated by aeneas and show each segment's text according to that map. Anagarika @sabbamitta, I have added a new Voice backlog section with items for me to research the use of aeneas for auto-segmented replay of an entire audio recording. I've also added an item for @Aminah to design the new settings UI to handle this.
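To sketch what "show the segment text according to the timing map" might look like: an aeneas JSON sync map contains a `fragments` list, each fragment carrying an `id`, `begin`/`end` times in seconds, and the text `lines`. The sample data and the `segment_at` helper below are hypothetical, just to illustrate looking up the segment for a given playback position.

```python
import bisect

# Hypothetical sample mimicking the shape of an aeneas JSON sync map.
sync_map = {
    "fragments": [
        {"id": "f000001", "begin": "0.000", "end": "2.680",
         "lines": ["Evaṁ me sutaṁ—"]},
        {"id": "f000002", "begin": "2.680", "end": "7.120",
         "lines": ["ekaṁ samayaṁ bhagavā sāvatthiyaṁ viharati"]},
    ]
}

def segment_at(sync_map, t):
    """Return the fragment whose [begin, end) interval contains time t (seconds)."""
    begins = [float(f["begin"]) for f in sync_map["fragments"]]
    i = bisect.bisect_right(begins, t) - 1
    if i >= 0 and t < float(sync_map["fragments"][i]["end"]):
        return sync_map["fragments"][i]
    return None

print(segment_at(sync_map, 3.5)["id"])  # → f000002
```

The player would call something like `segment_at` on each time-update event and highlight the returned segment's text.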
What this will do is allow us to:
- provide a Voice setting for Pali spoken by Bhante Sujato as the default (when available), with Aditi as the alternative.
- provide a Voice setting for English spoken by Bhante Sujato as the default (when available), with all existing AI voices as alternatives.
@Michaelh, I'm guessing that aeneas mapping files take time to generate, so I will need to figure out where to store them in Voice. Also, would you generate the mapping files yourself, or would you expect this to be done by the Voice server?
Bhante @Sujato, just to confirm: we assume you would re-record any sutta affected by post-recording translation changes, to avoid a mismatch between a corrected text segment and its corresponding audio.