2024
Predicting positive transfer for improved low-resource speech recognition using acoustic pseudo-tokens
Nay San | Georgios Paraskevopoulos | Aryaman Arora | Xiluo He | Prabhjot Kaur | Oliver Adams | Dan Jurafsky
Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP
While massively multilingual speech models like wav2vec 2.0 XLSR-128 can be directly fine-tuned for automatic speech recognition (ASR), downstream performance can still be relatively poor on languages that are under-represented in the pre-training data. Continued pre-training on 70–200 hours of untranscribed speech in these languages can help, but what about languages without that much recorded data? For such cases, we show that supplementing the target language with data from a similar, higher-resource ‘donor’ language can help. For example, continued pre-training on only 10 hours of low-resource Punjabi supplemented with 60 hours of donor Hindi is almost as good as continued pre-training on 70 hours of Punjabi. By contrast, sourcing supplemental data from less similar donors like Bengali does not improve ASR performance. To inform donor language selection, we propose a novel similarity metric based on the sequence distribution of induced acoustic units: the Acoustic Token Distribution Similarity (ATDS). Across a set of typologically different target languages (Punjabi, Galician, Iban, Setswana), we show that the ATDS between the target language and its candidate donors precisely predicts target language ASR performance.
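The abstract does not give the exact ATDS formulation. As a rough illustration of the general idea (discretise speech into induced acoustic units, then compare unit-sequence distributions across languages), here is a hypothetical Python sketch; the choices of k-means clustering over frame-level features, bigram statistics, and cosine similarity are illustrative assumptions, not necessarily the paper's method.

```python
# Hypothetical sketch only: induce acoustic pseudo-tokens via a shared
# k-means codebook over frame-level speech features, then compare the
# bigram distributions of the resulting unit sequences for two languages.
# The actual ATDS metric may differ in unit induction, sequence statistics,
# and similarity measure.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def fit_unit_inducer(features_per_utt, n_units=100, seed=0):
    """Fit one k-means codebook (shared across the languages being compared)."""
    all_frames = np.concatenate(features_per_utt, axis=0)
    return KMeans(n_clusters=n_units, random_state=seed, n_init=10).fit(all_frames)


def tokenize(km, features_per_utt):
    """Map each utterance's [T_i, D] frame features to a pseudo-token sequence."""
    return [km.predict(f) for f in features_per_utt]


def bigram_distribution(token_seqs, n_units):
    """Normalised bigram counts over the induced unit sequences."""
    counts = Counter()
    for seq in token_seqs:
        counts.update(zip(seq[:-1], seq[1:]))
    dist = np.zeros(n_units * n_units)
    for (a, b), c in counts.items():
        dist[a * n_units + b] = c
    return dist / dist.sum()


def distribution_similarity(p, q):
    """Cosine similarity between two unit-sequence distributions."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))
```

Under this sketch, the same codebook would be applied to the target language and each candidate donor, and the donor whose distribution is most similar to the target's would be ranked highest for supplementing continued pre-training.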
2023
Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation
Martijn Bartelds | Nay San | Bradley McDonnell | Dan Jurafsky | Martijn Wieling
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The performance of automatic speech recognition (ASR) systems has advanced substantially in recent years, particularly for languages for which a large amount of transcribed speech is available. Unfortunately, for low-resource languages, such as minority languages, regional languages, or dialects, ASR performance generally remains much lower. In this study, we investigate whether data augmentation techniques could help improve low-resource ASR performance, focusing on four typologically diverse minority languages or language variants (West Germanic: Gronings, West-Frisian; Malayo-Polynesian: Besemah, Nasal). For all four languages, we examine the use of self-training, where an ASR system trained with the available human-transcribed data is used to generate transcriptions, which are then combined with the original data to train a new ASR system. For Gronings, for which a pre-existing text-to-speech (TTS) system was available, we also examine the use of TTS to generate ASR training data from text-only sources. We find that using a self-training approach consistently yields improved performance (a relative WER reduction of up to 20.5% compared to using an ASR system trained on 24 minutes of manually transcribed speech). The performance gain from TTS augmentation for Gronings was even stronger (up to 25.5% relative reduction in WER compared to a system based on 24 minutes of manually transcribed speech). In sum, our results show the benefit of using self-training or (if possible) TTS-generated data as an efficient solution to overcome the limitations of data availability for resource-scarce languages in order to improve ASR performance.
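As a minimal sketch of the self-training loop described above (seed a system from the transcribed data, pseudo-label the untranscribed audio, retrain on the combined set), the following hypothetical Python outline may help; train_asr and transcribe are placeholder callables for whatever ASR toolkit is in use, not functions from the paper.

```python
# Minimal self-training loop, sketched under the assumption that an ASR
# toolkit provides training and decoding callables. Not the authors' code.
from typing import Callable, List, Tuple

Audio = str                   # placeholder: e.g. a path to a wav file
Example = Tuple[Audio, str]   # (audio, transcript)


def self_train(
    labelled: List[Example],
    unlabelled: List[Audio],
    train_asr: Callable[[List[Example]], object],
    transcribe: Callable[[object, Audio], str],
    rounds: int = 1,
) -> object:
    """Train, pseudo-label, merge with human transcripts, and retrain."""
    model = train_asr(labelled)  # seed system from the small transcribed set
    for _ in range(rounds):
        # Pseudo-label the untranscribed recordings with the current model.
        pseudo = [(a, transcribe(model, a)) for a in unlabelled]
        # Combine human transcriptions with pseudo-labels and retrain.
        model = train_asr(labelled + pseudo)
    return model
```

The TTS variant described for Gronings would replace the pseudo-labelling step with synthesising audio for text-only sources and pairing it with that text.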
Leveraging supplementary text data to kick-start automatic speech recognition system development with limited transcriptions
Nay San | Martijn Bartelds | Blaine Billings | Ella de Falco | Hendi Feriza | Johan Safri | Wawan Sahrozi | Ben Foley | Bradley McDonnell | Dan Jurafsky
Proceedings of the Sixth Workshop on the Use of Computational Methods in the Study of Endangered Languages
2022
Automated speech tools for helping communities process restricted-access corpora for language revival efforts
Nay San | Martijn Bartelds | Tolulope Ogunremi | Alison Mount | Ruben Thompson | Michael Higgins | Roy Barker | Jane Simpson | Dan Jurafsky
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely-used language such as English for metalinguistic commentary and questions (e.g., What is the word for ‘tree’?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report work-in-progress processing 136 hours of archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials shows that this workflow reduces metalanguage transcription time by 20% even given only minimal amounts of annotated training data: 10 utterances per language for SLI, and at most 39 minutes (possibly as little as 39 seconds) for ASR.
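As a hedged illustration of the triage workflow (VAD to find speech, SLI to separate the metalanguage from Muruwari, ASR on the metalanguage only), here is a hypothetical Python sketch; the component callables stand in for off-the-shelf VAD, SLI, and ASR systems and are not the project's actual interfaces.

```python
# Hypothetical triage pipeline: transcribe only segments identified as the
# metalanguage (here English), leaving endangered-language segments for
# review by authorised community members. Component names are placeholders.
from typing import Callable, List, Tuple

Segment = Tuple[float, float]  # (start_seconds, end_seconds)


def triage(
    audio_path: str,
    vad_segments: Callable[[str], List[Segment]],
    identify_language: Callable[[str, Segment], str],  # e.g. "eng" or "mur"
    transcribe: Callable[[str, Segment], str],         # English ASR
) -> List[Tuple[Segment, str]]:
    """Return (segment, transcript) pairs of quickly scannable metalanguage."""
    report = []
    for seg in vad_segments(audio_path):                  # 1. speech regions
        if identify_language(audio_path, seg) == "eng":   # 2. language ID
            report.append((seg, transcribe(audio_path, seg)))  # 3. ASR
    return report
```

An authorised person could then scan the returned transcripts to decide which recordings are safe to pass on for annotation by people with lower levels of access.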
2019
Future Directions in Technological Support for Language Documentation
Daan van Esch | Ben Foley | Nay San
Proceedings of the 3rd Workshop on the Use of Computational Methods in the Study of Endangered Languages, Volume 1 (Papers)