R-Spin: Efficient Speaker and Noise-invariant Representation Learning with Acoustic Pieces

Heng-Jui Chang, James Glass


Abstract
This paper introduces Robust Spin (R-Spin), a data-efficient, domain-specific self-supervision method for speaker- and noise-invariant speech representations that learns discrete acoustic units with speaker-invariant clustering (Spin). R-Spin resolves Spin’s issues and enhances content representations by learning to predict acoustic pieces. R-Spin offers a 12× reduction in computational resources compared with previous state-of-the-art methods while outperforming them on severely distorted speech. The paper provides detailed analyses showing how discrete units contribute to speech encoder training and improve robustness in diverse acoustic environments.
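To make the "acoustic pieces" idea concrete, below is a minimal sketch assuming they are formed by BPE-style merges over the frame-level discrete unit sequences produced by a clustering model such as Spin; the function names, parameters, and toy data are illustrative, not from the authors' code.

```python
from collections import Counter


def most_frequent_pair(seqs):
    """Count adjacent unit pairs across all sequences; return the most frequent."""
    counts = Counter()
    for seq in seqs:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None


def merge_pair(seq, pair, new_id):
    """Replace every occurrence of `pair` in `seq` with the merged token `new_id`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out


def learn_acoustic_pieces(seqs, vocab_size, base_vocab):
    """Greedy BPE over discrete-unit sequences: repeatedly merge the most
    frequent adjacent pair into a new 'piece' until the vocabulary reaches
    `vocab_size`. Returns the merged sequences and the merge table."""
    merges, next_id = {}, base_vocab
    while next_id < vocab_size:
        pair = most_frequent_pair(seqs)
        if pair is None:
            break
        merges[pair] = next_id
        seqs = [merge_pair(s, pair, next_id) for s in seqs]
        next_id += 1
    return seqs, merges


if __name__ == "__main__":
    # Toy frame-level cluster IDs (hypothetical Spin output, base vocab of 8).
    units = [[3, 3, 7, 7, 7, 1], [3, 7, 7, 1, 1]]
    pieces, merges = learn_acoustic_pieces(units, vocab_size=12, base_vocab=8)
    print(pieces, merges)
```

The resulting piece IDs would then serve as longer-span prediction targets for the speech encoder, which is the role the abstract attributes to acoustic pieces.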
Anthology ID:
2024.naacl-long.36
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
642–662
URL:
https://aclanthology.org/2024.naacl-long.36
Cite (ACL):
Heng-Jui Chang and James Glass. 2024. R-Spin: Efficient Speaker and Noise-invariant Representation Learning with Acoustic Pieces. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 642–662, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
R-Spin: Efficient Speaker and Noise-invariant Representation Learning with Acoustic Pieces (Chang & Glass, NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.36.pdf
Copyright:
2024.naacl-long.36.copyright.pdf