Improving Arabic Diacritization by Learning to Diacritize and Translate

Brian Thompson, Ali Alshehri
Abstract
We propose a novel multitask learning method for diacritization which trains a model to both diacritize and translate. Our method addresses data sparsity by exploiting large, readily available bitext corpora. Furthermore, translation requires implicit linguistic and semantic knowledge, which is helpful for resolving ambiguities in diacritization. We apply our method to the Penn Arabic Treebank and report a new state-of-the-art word error rate of 4.79%. We also conduct manual and automatic analysis to better understand our method and highlight some of the remaining challenges in diacritization. Our method has applications in text-to-speech, speech-to-speech translation, and other NLP tasks.
Anthology ID:
2022.iwslt-1.2
Volume:
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland (in-person and online)
Venues:
ACL | IWSLT
Publisher:
Association for Computational Linguistics
Pages:
11–21
URL:
https://aclanthology.org/2022.iwslt-1.2
DOI:
10.18653/v1/2022.iwslt-1.2
Cite (ACL):
Brian Thompson and Ali Alshehri. 2022. Improving Arabic Diacritization by Learning to Diacritize and Translate. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 11–21, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Cite (Informal):
Improving Arabic Diacritization by Learning to Diacritize and Translate (Thompson & Alshehri, IWSLT 2022)
PDF:
https://aclanthology.org/2022.iwslt-1.2.pdf
Data
WikiMatrix