Do Multilingual Large Language Models Mitigate Stereotype Bias?

Shangrui Nie, Michael Fromm, Charles Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Mowmita, Nicolas Flores-Herr, Mehdi Ali, Lucie Flek


Abstract
While preliminary findings indicate that multilingual LLMs exhibit reduced bias compared to monolingual ones, a comprehensive understanding of the effect of multilingual training on bias mitigation is lacking. This study addresses this gap by systematically training six LLMs of identical size (2.6B parameters) and architecture: five monolingual models (English, German, French, Italian, and Spanish) and one multilingual model trained on an equal distribution of data across these languages, all using publicly available data. To ensure robust evaluation, standard bias benchmarks were automatically translated into the five target languages and verified for both translation quality and bias preservation by human annotators. Our results consistently demonstrate that multilingual training effectively mitigates bias. Moreover, multilingual models achieve not only lower bias but also superior prediction accuracy compared to monolingual models with the same amount of training data, model architecture, and size.
Anthology ID:
2024.c3nlp-1.6
Volume:
Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Vinodkumar Prabhakaran, Sunipa Dev, Luciana Benotti, Daniel Hershcovich, Laura Cabello, Yong Cao, Ife Adebara, Li Zhou
Venues:
C3NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
65–83
URL:
https://aclanthology.org/2024.c3nlp-1.6
DOI:
10.18653/v1/2024.c3nlp-1.6
Cite (ACL):
Shangrui Nie, Michael Fromm, Charles Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Mowmita, Nicolas Flores-Herr, Mehdi Ali, and Lucie Flek. 2024. Do Multilingual Large Language Models Mitigate Stereotype Bias?. In Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP, pages 65–83, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Do Multilingual Large Language Models Mitigate Stereotype Bias? (Nie et al., C3NLP-WS 2024)
PDF:
https://aclanthology.org/2024.c3nlp-1.6.pdf