Self-supervised Semantic-driven Phoneme Discovery for Zero-resource Speech Recognition

Liming Wang, Siyuan Feng, Mark Hasegawa-Johnson, Chang Yoo

Abstract
Phonemes are defined by their relationship to words: changing a phoneme changes the word. Learning a phoneme inventory with little supervision has been a longstanding challenge, with important applications to under-resourced speech technology. In this paper, we bridge the gap between the linguistic and statistical definitions of phonemes and propose a novel neural discrete representation learning model for self-supervised learning of a phoneme inventory from raw speech and word labels. Under mild assumptions, we prove that the phoneme inventory learned by our approach converges to the true one with an exponentially low error rate. Moreover, in experiments on the TIMIT and Mboshi benchmarks, our approach consistently learns a better phoneme-level representation and achieves a lower error rate on a zero-resource phoneme recognition task than previous state-of-the-art self-supervised representation learning algorithms.
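The paper itself is not reproduced on this page, so as a rough illustration of what "neural discrete representation learning from raw speech and word labels" can look like, here is a minimal PyTorch sketch: a recurrent encoder with a Gumbel-softmax vector-quantization bottleneck, trained only against utterance-level word labels. All names (PhonemeDiscoverer, n_units, etc.) and the specific architecture are illustrative assumptions, not the authors' released model.

```python
# Hypothetical sketch (not the authors' code): discrete phoneme-like units
# learned from speech features with word labels as the only supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhonemeDiscoverer(nn.Module):
    def __init__(self, feat_dim=39, n_units=50, hidden=256, n_words=1000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.to_logits = nn.Linear(2 * hidden, n_units)  # per-frame unit logits
        self.unit_emb = nn.Embedding(n_units, hidden)    # discrete codebook
        self.word_head = nn.Linear(hidden, n_words)      # predicts the word label

    def forward(self, feats):
        # feats: (batch, frames, feat_dim) acoustic features
        h, _ = self.encoder(feats)
        logits = self.to_logits(h)                       # (B, T, n_units)
        # Straight-through Gumbel-softmax: discrete forward pass,
        # differentiable backward pass.
        onehot = F.gumbel_softmax(logits, tau=1.0, hard=True)
        units = onehot @ self.unit_emb.weight            # (B, T, hidden)
        pooled = units.mean(dim=1)                       # utterance summary
        return self.word_head(pooled), logits

# One training step on dummy data: the word label of each utterance is
# the only supervision shaping the discrete unit inventory.
model = PhonemeDiscoverer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
feats = torch.randn(8, 120, 39)        # 8 utterances, 120 frames each
words = torch.randint(0, 1000, (8,))   # dummy word labels
word_logits, _ = model(feats)
loss = F.cross_entropy(word_logits, words)
opt.zero_grad()
loss.backward()
opt.step()
```

The design choice the sketch is meant to highlight: the quantization bottleneck forces every frame through a small discrete inventory, so the word-prediction loss can only be reduced by making those units word-discriminative, which is the statistical counterpart of the linguistic definition of a phoneme given in the abstract.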
Anthology ID:
2022.acl-long.553
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8027–8047
URL:
https://aclanthology.org/2022.acl-long.553
DOI:
10.18653/v1/2022.acl-long.553
Cite (ACL):
Liming Wang, Siyuan Feng, Mark Hasegawa-Johnson, and Chang Yoo. 2022. Self-supervised Semantic-driven Phoneme Discovery for Zero-resource Speech Recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8027–8047, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Self-supervised Semantic-driven Phoneme Discovery for Zero-resource Speech Recognition (Wang et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.553.pdf
Data:
LibriSpeech