A Group Fairness Lens for Large Language Models

Guanqun Bi, Yuqiang Xie, Lei Shen, Yanan Cao

Abstract
The rapid advancement of large language models (LLMs) has revolutionized various applications but also raised serious concerns about their potential to perpetuate biases and unfairness when deployed in social media contexts. Evaluating LLMs’ potential biases and fairness has become crucial, as existing methods rely on limited prompts focusing on just a few groups and lack a comprehensive categorical perspective. In this paper, we propose evaluating LLM biases from a group fairness lens using a novel hierarchical schema that characterizes diverse social groups. Specifically, we construct a dataset, GFair, encapsulating target-attribute combinations across multiple dimensions. In addition, we introduce statement organization, a new open-ended text generation task, to uncover complex biases in LLMs. Extensive evaluations of popular LLMs reveal inherent safety concerns. To mitigate bias in LLMs from a group fairness perspective, we pioneer a novel chain-of-thought method, GF-Think. Experimental results demonstrate its efficacy in mitigating bias in LLMs and achieving fairness.
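The abstract names GF-Think as a chain-of-thought mitigation method but this page does not reproduce its prompt or pipeline. As a purely illustrative sketch of what a group-fairness chain-of-thought pass might look like, the template wording, the `gf_think` helper, and the listed fairness dimensions below are all assumptions for illustration, not the authors' actual method:

```python
# Hypothetical sketch of a chain-of-thought bias-mitigation pass in the
# spirit of GF-Think. The paper's real prompt and pipeline are not shown
# on this page, so every step below is an illustrative assumption.

from typing import Callable

GF_THINK_TEMPLATE = """You are asked to respond about the social group: {group}.
Before answering, reason step by step:
1. Identify which fairness dimension the group belongs to (e.g., gender, age, religion).
2. List stereotypes or biased attributes commonly associated with this group.
3. Check whether a draft answer relies on any of those attributes.
4. Rewrite the answer so it treats the group the same as any comparable group.

Question: {question}
Step-by-step reasoning and final unbiased answer:"""


def gf_think(generate: Callable[[str], str], group: str, question: str) -> str:
    """Run one chain-of-thought debiasing pass with any text-generation backend."""
    prompt = GF_THINK_TEMPLATE.format(group=group, question=question)
    return generate(prompt)


if __name__ == "__main__":
    # Stub backend so the sketch runs without an API key; swap in a real LLM call.
    echo = lambda p: f"[model output for prompt of {len(p)} chars]"
    print(gf_think(echo, group="older adults",
                   question="Describe this group's workplace skills."))
```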
Anthology ID:
2025.findings-emnlp.431
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8117–8139
URL:
https://aclanthology.org/2025.findings-emnlp.431/
Cite (ACL):
Guanqun Bi, Yuqiang Xie, Lei Shen, and Yanan Cao. 2025. A Group Fairness Lens for Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 8117–8139, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
A Group Fairness Lens for Large Language Models (Bi et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.431.pdf
Checklist:
2025.findings-emnlp.431.checklist.pdf