A Survey of Multilingual Models for Automatic Speech Recognition

Hemant Yadav, Sunayana Sitaram


Abstract
Although Automatic Speech Recognition (ASR) systems have achieved human-like performance for a few languages, the majority of the world's languages do not have usable systems, due to the lack of large speech datasets to train these models. Cross-lingual transfer is an attractive solution to this problem, because low-resource languages can potentially benefit from higher-resource languages, either through transfer learning or by being jointly trained in the same multilingual model. The problem of cross-lingual transfer has been well studied in ASR; however, recent advances in Self-Supervised Learning are opening up avenues for unlabeled speech data to be used in multilingual ASR models, which can pave the way for improved performance on low-resource languages. In this paper, we survey the state of the art in multilingual ASR models that are built with cross-lingual transfer in mind. We present best practices for building multilingual models from research across diverse languages and techniques, discuss open questions, and provide recommendations for future work.
Anthology ID: 2022.lrec-1.542
Volume: Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month: June
Year: 2022
Address: Marseille, France
Venue: LREC
Publisher: European Language Resources Association
Pages: 5071–5079
URL: https://aclanthology.org/2022.lrec-1.542
Cite (ACL): Hemant Yadav and Sunayana Sitaram. 2022. A Survey of Multilingual Models for Automatic Speech Recognition. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5071–5079, Marseille, France. European Language Resources Association.
Cite (Informal): A Survey of Multilingual Models for Automatic Speech Recognition (Yadav & Sitaram, LREC 2022)
PDF: https://aclanthology.org/2022.lrec-1.542.pdf