Quantifying Language Variation Acoustically with Few Resources

Martijn Bartelds, Martijn Wieling


Abstract
Deep acoustic models represent linguistic information based on massive amounts of data. Unfortunately, for regional languages and dialects such resources are mostly unavailable. However, deep acoustic models might have learned linguistic information that transfers to low-resource languages. In this study, we evaluate whether this is the case through the task of distinguishing low-resource (Dutch) regional varieties. By extracting embeddings from the hidden layers of various wav2vec 2.0 models (including new models which are pre-trained and/or fine-tuned on Dutch) and using dynamic time warping, we compute pairwise pronunciation differences averaged over 10 words for over 100 individual dialects from four (regional) languages. We then cluster the resulting difference matrix into four groups and compare these to a gold standard and to a partitioning based on comparing phonetic transcriptions. Our results show that acoustic models outperform the (traditional) transcription-based approach without requiring phonetic transcriptions, with the best performance achieved by the multilingual XLSR-53 model fine-tuned on Dutch. On the basis of only six seconds of speech, the resulting clustering closely matches the gold standard.
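The core comparison step described in the abstract, aligning two sequences of frame-level embeddings with dynamic time warping to obtain a pronunciation difference, can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the embedding arrays are randomly generated stand-ins for wav2vec 2.0 hidden-layer outputs, and the length normalization is one common convention.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two sequences of
    frame-level embedding vectors, each of shape (frames, dims)."""
    n, m = len(x), len(y)
    # Pairwise Euclidean distances between all frame pairs
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    # Accumulated-cost matrix with an extra padding row/column
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
                acc[i - 1, j - 1],  # match
            )
    # Normalize by the combined sequence length so that long and
    # short words are comparable (one of several common conventions)
    return acc[n, m] / (n + m)

# Hypothetical embeddings for one word spoken in two dialects:
# random stand-ins with the wav2vec 2.0 hidden size of 768
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(48, 768))  # 48 frames
emb_b = rng.normal(size=(52, 768))  # 52 frames
print(dtw_distance(emb_a, emb_b))
```

Averaging such distances over the 10 words per dialect pair yields the pairwise difference matrix that is subsequently clustered into four groups.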
Anthology ID:
2022.naacl-main.273
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3735–3741
URL:
https://aclanthology.org/2022.naacl-main.273
DOI:
10.18653/v1/2022.naacl-main.273
Bibkey:
Cite (ACL):
Martijn Bartelds and Martijn Wieling. 2022. Quantifying Language Variation Acoustically with Few Resources. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3735–3741, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Quantifying Language Variation Acoustically with Few Resources (Bartelds & Wieling, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.273.pdf
Code:
bartelds/language-variation
Data:
LibriSpeech