Assessing Multilingual Fairness in Pre-trained Multimodal Representations

Jialu Wang, Yang Liu, Xin Wang


Abstract
Recently, pre-trained multimodal models such as CLIP have shown exceptional capabilities for connecting images and natural language. Their English textual representations can be transferred to many other languages, supporting downstream multimodal tasks across languages. Nevertheless, the principle of multilingual fairness is rarely scrutinized: do multilingual multimodal models treat languages equally? Are their performances biased towards particular languages? To answer these questions, we view language as the fairness recipient and introduce two new fairness notions for pre-trained multimodal models: multilingual individual fairness and multilingual group fairness. Multilingual individual fairness requires that text snippets expressing similar semantics in different languages connect similarly to images, while multilingual group fairness requires equalized predictive performance across languages. We characterize the extent to which pre-trained multilingual vision-and-language representations are individually fair across languages. However, extensive experiments demonstrate that multilingual representations do not satisfy group fairness: (1) there is a severe multilingual accuracy disparity; and (2) the errors exhibit biases across languages conditioned on the group of people depicted in the images, including their race, gender, and age.
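The individual-fairness notion above can be made concrete as a similarity-gap measurement: encode one image and several translations of the same caption, then compare how similarly each language's caption connects to the image. The sketch below uses hypothetical toy embeddings in place of a real multilingual CLIP encoder (the function name `individual_fairness_gap` and the specific vectors are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def individual_fairness_gap(image_emb, caption_embs):
    """Spread of image-text similarity across languages.

    caption_embs maps a language code to the embedding of a caption
    expressing the same meaning in that language. A perfectly
    individually fair model assigns identical similarities to all
    translations, so the gap is 0.
    """
    sims = {lang: cosine(image_emb, e) for lang, e in caption_embs.items()}
    return max(sims.values()) - min(sims.values()), sims

# Toy embeddings standing in for encoder outputs (hypothetical values;
# a real experiment would use vectors produced by a multilingual model).
image = np.array([1.0, 0.0, 0.0])
captions = {
    "en": np.array([0.9, 0.1, 0.0]),
    "de": np.array([0.8, 0.2, 0.0]),
    "zh": np.array([0.5, 0.5, 0.0]),
}
gap, sims = individual_fairness_gap(image, captions)
```

In this toy setup the English caption embedding sits closest to the image, so the gap is nonzero; a fairness audit would report such gaps aggregated over many image-caption pairs.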
Anthology ID:
2022.findings-acl.211
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2681–2695
URL:
https://aclanthology.org/2022.findings-acl.211
DOI:
10.18653/v1/2022.findings-acl.211
Cite (ACL):
Jialu Wang, Yang Liu, and Xin Wang. 2022. Assessing Multilingual Fairness in Pre-trained Multimodal Representations. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2681–2695, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Assessing Multilingual Fairness in Pre-trained Multimodal Representations (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.211.pdf
Software:
2022.findings-acl.211.software.tgz
Data:
FairFace