2022
Fine-tuning pre-trained models for Automatic Speech Recognition, experiments on a fieldwork corpus of Japhug (Trans-Himalayan family)
Séverine Guillaume | Guillaume Wisniewski | Cécile Macaire | Guillaume Jacques | Alexis Michaud | Benjamin Galliot | Maximin Coavoux | Solange Rossato | Minh-Châu Nguyên | Maxime Fily
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
This is a report on results obtained in the development of speech recognition tools intended to support linguistic documentation efforts. The test case is an extensive fieldwork corpus of Japhug, an endangered language of the Trans-Himalayan (Sino-Tibetan) family. The goal is to reduce the transcription workload of field linguists. The method is a deep learning approach based on the language-specific fine-tuning of a generic pre-trained representation model, XLS-R, which uses a Transformer architecture. We note implementation difficulties in terms of learning stability, but the approach nonetheless brings significant improvements. The quality of phonemic transcription is improved over earlier experiments, and most significantly, the new approach allows reaching the stage of automatic word recognition. Subjective evaluation of the tool by the author of the training data confirms the usefulness of this approach.
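As a rough illustration of the approach described in the abstract (language-specific fine-tuning of the pre-trained XLS-R model for phoneme-level transcription), the sketch below shows how such a setup could look with the Hugging Face `transformers` library. This is not the authors' actual pipeline: the checkpoint name, vocabulary file, output path, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's exact configuration):
# fine-tune a pre-trained XLS-R checkpoint with a CTC head for
# phoneme-level transcription of a low-resource fieldwork corpus.
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
    TrainingArguments,
)

# Hypothetical vocabulary file mapping the target language's phonemes/characters to ids.
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]", pad_token="[PAD]")
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load the generic pre-trained representation model and add a CTC output layer
# sized to the target vocabulary; freezing the convolutional feature encoder is
# a common choice when fine-tuning on small corpora.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",          # assumed XLS-R checkpoint
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()

# Illustrative training hyperparameters only.
training_args = TrainingArguments(
    output_dir="xlsr-finetuned",             # hypothetical output path
    per_device_train_batch_size=8,
    learning_rate=3e-4,
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,
)

# A Trainer would then be built with datasets of audio segments and aligned
# transcriptions prepared with `processor` (omitted here):
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```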