Optimized Tokenization for Transcribed Error Correction

Tomer Wullach, Shlomo Chazan


Abstract
The challenges facing speech recognition systems, such as variations in pronunciation, adverse audio conditions, and the scarcity of labeled data, emphasize the necessity for a post-processing step that corrects recurring errors. Previous research has shown the advantages of employing dedicated error correction models, yet training such models requires large amounts of labeled data that are not easily obtained. To overcome this limitation, synthetic transcribed-like data is often utilized; however, bridging the distribution gap between transcribed errors and synthetic noise is not trivial. In this paper, we demonstrate that the performance of correction models can be significantly increased by training solely on synthetic data. Specifically, we empirically show that: (1) synthetic data generated using the error distribution derived from a set of transcribed data outperforms the common approach of applying random perturbations; (2) applying language-specific adjustments to the vocabulary of a BPE tokenizer strikes a balance between adapting to unseen distributions and retaining knowledge of transcribed errors. We showcase the benefits of these key observations, and evaluate our approach using multiple languages, speech recognition systems, and prominent speech recognition datasets.
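The first observation above, generating synthetic training data from an error distribution estimated on real transcriptions rather than from random perturbations, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the word-level substitution model, and the equal-length zip alignment are simplifying assumptions made here for illustration (a real system would align reference and hypothesis transcripts properly and also model insertions and deletions).

```python
import random
from collections import Counter, defaultdict

def build_error_distribution(pairs):
    """Estimate a word-level substitution distribution from
    (reference, hypothesis) transcript pairs.

    Simplification: assumes equal-length, position-aligned transcripts;
    insertions and deletions are ignored.
    """
    dist = defaultdict(Counter)
    for ref, hyp in pairs:
        for r, h in zip(ref.split(), hyp.split()):
            if r != h:
                dist[r][h] += 1
    return dist

def corrupt(text, dist, p=0.3, rng=None):
    """Inject synthetic errors into clean text by replacing words
    with substitutions sampled from the empirical distribution,
    instead of applying uniformly random perturbations."""
    rng = rng or random.Random(0)
    out = []
    for word in text.split():
        subs = dist.get(word)
        if subs and rng.random() < p:
            candidates, weights = zip(*subs.items())
            out.append(rng.choices(candidates, weights=weights)[0])
        else:
            out.append(word)
    return " ".join(out)
```

The corrupted text and its clean counterpart then form (noisy, clean) training pairs for a correction model, so the synthetic noise follows the error patterns the recognizer actually produces.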
Anthology ID:
2023.emnlp-main.802
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
12988–12997
URL:
https://aclanthology.org/2023.emnlp-main.802
DOI:
10.18653/v1/2023.emnlp-main.802
Cite (ACL):
Tomer Wullach and Shlomo Chazan. 2023. Optimized Tokenization for Transcribed Error Correction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12988–12997, Singapore. Association for Computational Linguistics.
Cite (Informal):
Optimized Tokenization for Transcribed Error Correction (Wullach & Chazan, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.802.pdf
Video:
https://aclanthology.org/2023.emnlp-main.802.mp4