In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages

Asım Ersoy, Gerson Vizcarra, Tahsin Mayeesha, Benjamin Muller


Abstract
Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases are induced in the predictions of these models. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of the predictions of XGLM and BLOOM, two popular generative multilingual language models, in five languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, XGLM generates informal text in Arabic and Bengali when conditioned with informal prompts, much more than BLOOM. In addition, even though both models are highly biased toward the formal style when prompted neutrally, we find that the models generate a significant amount of informal predictions even when prompted with formal text. We release with this work 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs.
Anthology ID: 2023.findings-emnlp.175
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2650–2666
URL: https://aclanthology.org/2023.findings-emnlp.175
DOI: 10.18653/v1/2023.findings-emnlp.175
Cite (ACL): Asım Ersoy, Gerson Vizcarra, Tahsin Mayeesha, and Benjamin Muller. 2023. In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2650–2666, Singapore. Association for Computational Linguistics.
Cite (Informal): In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages (Ersoy et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.175.pdf