Discovering and Mitigating Indirect Bias in Attention-Based Model Explanations

Farsheed Haque, Depeng Xu, Shuhan Yuan


Abstract
As the field of Natural Language Processing (NLP) increasingly adopts transformer-based models, the issue of bias becomes more pronounced. Such bias, manifesting through stereotypes and discriminatory practices, can disadvantage certain groups. Our study focuses on direct and indirect bias in model explanations, where the model makes predictions by relying heavily on identity tokens or their associated contexts. We present a novel analysis of bias in model explanations, especially the subtle indirect bias, underlining the limitations of traditional fairness metrics. We first define direct and indirect bias in model explanations, which are complementary to fairness in predictions. We then develop an indirect bias discovery algorithm for quantitatively evaluating indirect bias in transformer models using their built-in self-attention matrices. We also propose an indirect bias mitigation algorithm that ensures fairness in transformer models by leveraging attention explanations. Our evaluation shows the significance of indirect bias and the effectiveness of our indirect bias discovery and mitigation algorithms.
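To make the attention-based notion of bias concrete, the sketch below measures how much of a classifier's [CLS] self-attention falls on identity tokens, a rough proxy for the direct-bias case (the paper's indirect-bias algorithm further tracks attention on context tokens associated with identity terms, which this sketch does not implement). This is a minimal illustration, not the authors' algorithm; the model name and identity-token list are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact method): estimate the
# fraction of [CLS] attention that a BERT-style classifier places on identity
# tokens, averaged over all layers and heads.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"                 # assumption: any BERT-style model
IDENTITY_TOKENS = {"she", "he", "woman", "man"}  # assumption: example identity terms

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, output_attentions=True
)
model.eval()

def identity_attention_mass(sentence: str) -> float:
    """Fraction of [CLS] attention (mean over layers and heads)
    that falls on identity tokens in the sentence."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer.
    # Stack to (layers, batch, heads, seq, seq), then average layers and heads.
    attn = torch.stack(out.attentions).mean(dim=(0, 2))  # (batch, seq, seq)
    cls_attn = attn[0, 0]  # attention from [CLS] to every token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    mask = torch.tensor([t in IDENTITY_TOKENS for t in tokens])
    return (cls_attn[mask].sum() / cls_attn.sum()).item()

print(identity_attention_mass("she is a doctor at the clinic"))
```

Sentences whose predictions assign unusually high attention mass to identity tokens (or, in the indirect case, to their surrounding context) would be the candidates flagged by the kind of discovery procedure the abstract describes.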
Anthology ID:
2024.findings-naacl.104
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1599–1614
URL:
https://aclanthology.org/2024.findings-naacl.104
Cite (ACL):
Farsheed Haque, Depeng Xu, and Shuhan Yuan. 2024. Discovering and Mitigating Indirect Bias in Attention-Based Model Explanations. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1599–1614, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Discovering and Mitigating Indirect Bias in Attention-Based Model Explanations (Haque et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.104.pdf
Copyright:
2024.findings-naacl.104.copyright.pdf