%0 Conference Proceedings
%T Phone-ing it in: Towards Flexible Multi-Modal Language Model Training by Phonetic Representations of Data
%A Leong, Colin
%A Whitenack, Daniel
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F leong-whitenack-2022-phone
%X Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages. However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world’s languages. In this work, we propose a multi-modal approach to train language models using whatever text and/or audio data might be available in a language. Initial experiments using Swahili and Kinyarwanda data suggest the viability of the approach for downstream Named Entity Recognition (NER) tasks, with models pre-trained on phone data showing an improvement of up to 6% F1-score above models that are trained from scratch. Preprocessing and training code will be uploaded to https://github.com/sil-ai/phone-it-in.
%R 10.18653/v1/2022.acl-long.364
%U https://aclanthology.org/2022.acl-long.364
%U https://doi.org/10.18653/v1/2022.acl-long.364
%P 5306-5315