Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts

Xuyang Wu, Yuan Wang, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang


Abstract
Large vision-language models (LVLMs) have recently achieved significant progress, demonstrating strong capabilities in open-world visual understanding. However, it is not yet clear how LVLMs address demographic biases in real life, especially the disparities across attributes such as gender, skin tone, age and race. In this paper, We empirically investigate visual fairness in several mainstream LVLMs by auditing their performance disparities across demographic attributes using public fairness benchmark datasets (e.g., FACET, UTKFace). Our fairness evaluation framework employs direct and single-choice question prompt on visual question-answering/classification tasks. Despite advancements in visual understanding, our zero-shot prompting results show that both open-source and closed-source LVLMs continue to exhibit fairness issues across different prompts and demographic groups. Furthermore, we propose a potential multi-modal Chain-of-thought (CoT) based strategy for unfairness mitigation, applicable to both open-source and closed-source LVLMs. This approach enhances transparency and offers a scalable solution for addressing fairness, providing a solid foundation for future research and practical efforts in unfairness mitigation. The dataset and code used in this study are publicly available at this GitHub Repository.
Anthology ID:
2025.findings-emnlp.1251
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
22973–22991
Language:
URL:
https://aclanthology.org/2025.findings-emnlp.1251/
DOI:
Bibkey:
Cite (ACL):
Xuyang Wu, Yuan Wang, Hsin-Tai Wu, Zhiqiang Tao, and Yi Fang. 2025. Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 22973–22991, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts (Wu et al., Findings 2025)
Copy Citation:
PDF:
https://aclanthology.org/2025.findings-emnlp.1251.pdf
Checklist:
 2025.findings-emnlp.1251.checklist.pdf