Improving Acoustic Word Embeddings through Correspondence Training of Self-supervised Speech Representations

Amit Meghanani, Thomas Hain


Abstract
Acoustic word embeddings (AWEs) are vector representations of spoken words. An effective method for obtaining AWEs is the Correspondence Auto-Encoder (CAE). In the past, the CAE method has been applied to traditional MFCC features. Representations obtained from self-supervised learning (SSL)-based speech models such as HuBERT and Wav2vec2 outperform MFCCs in many downstream tasks, but they have not been well studied in the context of learning AWEs. This work explores the effectiveness of CAE with SSL-based speech representations for obtaining improved AWEs. Additionally, the capabilities of SSL-based speech models are explored in cross-lingual scenarios for obtaining AWEs. Experiments are conducted on five languages: Polish, Portuguese, Spanish, French, and English. The HuBERT-based CAE model achieves the best word discrimination results in all five languages, despite HuBERT being pre-trained on English only. The HuBERT-based CAE model also works well in cross-lingual settings: when trained on one source language and tested on the target languages, it outperforms MFCC-based CAE models trained directly on those target languages.
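For illustration, the following is a minimal sketch of correspondence training on SSL features: an encoder compresses the frame sequence of one spoken instance of a word into a fixed-dimensional vector (the AWE), and a decoder reconstructs the frames of a different instance of the same word. The GRU architecture, dimensions, and variable names are illustrative assumptions, not the paper's exact configuration, and the random tensors stand in for features that would in practice come from a pre-trained model such as HuBERT.

# Minimal sketch of correspondence auto-encoder (CAE) training on SSL
# features. Architecture and dimensions are illustrative assumptions,
# not the authors' exact setup.
import torch
import torch.nn as nn

class CorrespondenceAE(nn.Module):
    """Encode one spoken instance of a word into a fixed vector (the
    AWE) and decode toward the frames of another instance of it."""
    def __init__(self, feat_dim=768, emb_dim=128, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.to_emb = nn.Linear(hidden, emb_dim)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.to_feat = nn.Linear(hidden, feat_dim)

    def forward(self, x, target_len):
        _, h = self.encoder(x)            # h: (1, batch, hidden)
        emb = self.to_emb(h[-1])          # the acoustic word embedding
        # Repeat the embedding across the target's frames and decode.
        dec_in = emb.unsqueeze(1).repeat(1, target_len, 1)
        out, _ = self.decoder(dec_in)
        return self.to_feat(out), emb

# One training step on a correspondence pair: the target is a second
# utterance of the same word, not a reconstruction of the input.
model = CorrespondenceAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 768)   # stand-in for HuBERT frames, instance A
y = torch.randn(8, 60, 768)   # frames of another instance of the word
recon, emb = model(x, target_len=y.size(1))
loss = nn.functional.mse_loss(recon, y)
loss.backward()
opt.step()

After training, the bottleneck embedding serves as the AWE, and word discrimination can be evaluated by comparing embeddings of word pairs, e.g., with cosine distance.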
Anthology ID:
2024.eacl-long.118
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1959–1967
URL:
https://aclanthology.org/2024.eacl-long.118
Cite (ACL):
Amit Meghanani and Thomas Hain. 2024. Improving Acoustic Word Embeddings through Correspondence Training of Self-supervised Speech Representations. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1967, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Improving Acoustic Word Embeddings through Correspondence Training of Self-supervised Speech Representations (Meghanani & Hain, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.118.pdf
Note:
2024.eacl-long.118.note.zip