Learning Robust and Multilingual Speech Representations

Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord


Abstract
Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structure and improve downstream speech recognition performance. However, most research has focused on evaluating representations by their ability to improve speech recognition on read English (e.g., the Wall Street Journal and LibriSpeech corpora). This evaluation methodology overlooks two important desiderata of speech representations: robustness to domain shifts and transferability to other languages. In this paper we learn representations from up to 8000 hours of diverse and noisy speech data and evaluate them on their robustness to domain shifts and their ability to improve recognition performance in many languages. We find that our representations confer significant robustness advantages on the resulting recognition systems: they yield marked improvements in out-of-domain transfer relative to baseline feature sets, and likewise improve recognition in 25 phonetically diverse languages.
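The paper builds on contrastive predictive coding (CPC), which learns representations by training a model to distinguish the true future encoder output from negative samples drawn elsewhere in the batch. The sketch below is a minimal CPC-style InfoNCE loss in PyTorch; the function name, tensor shapes, and single linear prediction head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce_loss(pred, targets):
    """pred, targets: (N, D). Row i of `pred` should score highest
    against row i of `targets` (its positive); all other rows in the
    batch serve as negatives."""
    logits = pred @ targets.t()                   # (N, N) similarity scores
    labels = torch.arange(logits.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Illustrative usage (hypothetical shapes): predict the encoder output
# k steps in the future from the current context vector, then apply
# the contrastive loss.
B, T, D, k = 8, 100, 256, 12
context = torch.randn(B, T, D)                    # autoregressive context c_t
future = torch.randn(B, T, D)                     # encoder outputs z_t
predictor = nn.Linear(D, D)                       # one prediction head per offset k

pred = predictor(context[:, :-k]).reshape(-1, D)  # predictions for step t+k
targ = future[:, k:].reshape(-1, D)               # actual z_{t+k}
loss = info_nce_loss(pred, targ)
```

In full CPC a separate prediction head is trained per future offset k and the per-offset losses are summed; the single head above keeps the sketch short.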
Anthology ID: 2020.findings-emnlp.106
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Editors: Trevor Cohn, Yulan He, Yang Liu
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1182–1192
URL: https://aclanthology.org/2020.findings-emnlp.106
DOI: 10.18653/v1/2020.findings-emnlp.106
Cite (ACL): Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, and Aaron van den Oord. 2020. Learning Robust and Multilingual Speech Representations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1182–1192, Online. Association for Computational Linguistics.
Cite (Informal): Learning Robust and Multilingual Speech Representations (Kawakami et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.106.pdf
Data: AVSpeech, AudioSet, LibriSpeech