Automatic Restoration of Diacritics for Speech Data Sets

Sara Shatnawi, Sawsan Alqahtani, Hanan Aldarmaki


Abstract
Automatic text-based diacritic restoration models generally have high diacritic error rates when applied to speech transcripts as a result of domain and style shifts in spoken language. In this work, we explore the possibility of improving the performance of automatic diacritic restoration when applied to speech data by utilizing parallel spoken utterances. In particular, we use the pre-trained Whisper ASR model fine-tuned on relatively small amounts of diacritized Arabic speech data to produce rough diacritized transcripts for the speech utterances, which we then use as an additional input for diacritic restoration models. The proposed framework consistently improves diacritic restoration performance compared to text-only baselines. Our results highlight the inadequacy of current text-based diacritic restoration models for speech data sets and provide a new baseline for speech-based diacritic restoration.
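Pipeline sketch (not from the paper; a minimal illustration of the framework the abstract describes, assuming a Hugging Face Whisper checkpoint fine-tuned to emit diacritized Arabic, and a hypothetical concatenation scheme for feeding the rough transcript to a text-based restoration model):

import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Placeholder checkpoint: the paper fine-tunes Whisper on diacritized Arabic speech;
# any such fine-tuned checkpoint would be substituted here.
ASR_CKPT = "openai/whisper-small"

processor = WhisperProcessor.from_pretrained(ASR_CKPT)
asr_model = WhisperForConditionalGeneration.from_pretrained(ASR_CKPT)

def rough_diacritized_transcript(audio_array, sampling_rate=16_000):
    """Transcribe one utterance; with a diacritization-fine-tuned checkpoint,
    the output text carries (noisy) diacritics."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        generated_ids = asr_model.generate(inputs.input_features)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

def restoration_input(undiacritized_text, rough_transcript, sep=" [SEP] "):
    """Pair the undiacritized transcript with the rough ASR output as a single
    input sequence for a text-based diacritic restoration model.
    The separator and concatenation format are illustrative assumptions."""
    return undiacritized_text + sep + rough_transcript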
Anthology ID:
2024.naacl-long.233
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4166–4176
URL:
https://aclanthology.org/2024.naacl-long.233
Cite (ACL):
Sara Shatnawi, Sawsan Alqahtani, and Hanan Aldarmaki. 2024. Automatic Restoration of Diacritics for Speech Data Sets. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4166–4176, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Automatic Restoration of Diacritics for Speech Data Sets (Shatnawi et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.233.pdf
Copyright:
2024.naacl-long.233.copyright.pdf