Meta-Transfer Learning for Code-Switched Speech Recognition

Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Peng Xu, Pascale Fung


Abstract
An increasing number of people in the world today speak a mixed language as a result of being multilingual. However, building a speech recognition system for code-switching remains difficult due to limited resources and the expense and significant effort required to collect mixed-language data. We therefore propose a new learning method, meta-transfer learning, to transfer knowledge to a code-switched speech recognition system in a low-resource setting by judiciously extracting information from high-resource monolingual datasets. Our model learns to recognize the individual languages and to transfer that knowledge so as to better recognize mixed-language speech by conditioning the optimization on the code-switching data. In our experiments, the model outperforms existing baselines on speech recognition and language modeling tasks, and it converges faster.
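As a rough illustration of the idea in the abstract, the sketch below implements a first-order, MAML-style meta-transfer step in PyTorch: a copy of the model takes one gradient step on a high-resource monolingual batch, and the meta-gradient that updates the original parameters is then computed on a code-switched batch, so the update direction is conditioned on the code-switching data. All names (`meta_transfer_step`, `monolingual_loaders`, `codeswitch_loader`) and the single-step, first-order simplification are illustrative assumptions, not the authors' released implementation (see the repository linked under "Code" below for that).

```python
import copy
import torch
import torch.nn.functional as F

def meta_transfer_step(model, monolingual_loaders, codeswitch_loader,
                       inner_lr=1e-3, meta_lr=1e-4, loss_fn=F.cross_entropy):
    """One hypothetical meta-transfer update (first-order simplification).

    For each high-resource monolingual source, adapt a copy of the model
    with one inner SGD step, then evaluate the adapted copy on a
    code-switched batch; the code-switched gradients (not the monolingual
    ones) drive the update of the original parameters. A real ASR setup
    would use a sequence loss such as CTC instead of cross-entropy.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    cs_iter = iter(codeswitch_loader)

    for loader in monolingual_loaders:
        learner = copy.deepcopy(model)  # "fast weights" for the inner loop

        # Inner loop: one SGD step on a monolingual batch.
        x, y = next(iter(loader))
        inner_loss = loss_fn(learner(x), y)
        grads = torch.autograd.grad(inner_loss, list(learner.parameters()))
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p -= inner_lr * g

        # Outer loop: the meta-gradient is taken on code-switched data only,
        # so the optimization is conditioned on the target mixed language.
        x_cs, y_cs = next(cs_iter)
        outer_loss = loss_fn(learner(x_cs), y_cs)
        cs_grads = torch.autograd.grad(outer_loss, list(learner.parameters()))
        for mg, g in zip(meta_grads, cs_grads):
            mg += g

    # First-order meta-update of the original ("slow") parameters.
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= meta_lr * mg / len(monolingual_loaders)
```

A full second-order variant would backpropagate through the inner update (e.g., `create_graph=True` in the inner `autograd.grad` call); the first-order form above trades exactness for speed and memory.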
Anthology ID:
2020.acl-main.348
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3770–3776
URL:
https://aclanthology.org/2020.acl-main.348
DOI:
10.18653/v1/2020.acl-main.348
Cite (ACL):
Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Peng Xu, and Pascale Fung. 2020. Meta-Transfer Learning for Code-Switched Speech Recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3770–3776, Online. Association for Computational Linguistics.
Cite (Informal):
Meta-Transfer Learning for Code-Switched Speech Recognition (Winata et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.348.pdf
Video:
http://slideslive.com/38928739
Code:
audioku/meta-transfer-learning