Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen Fraser


Abstract
Motivations for methods in explainable artificial intelligence (XAI) often include detecting, quantifying and mitigating bias, and contributing to making machine learning models fairer. However, exactly how an XAI method can help in combating biases is often left unspecified. In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.
Anthology ID:
2022.trustnlp-1.8
Volume:
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
Month:
July
Year:
2022
Address:
Seattle, U.S.A.
Editors:
Apurv Verma, Yada Pruksachatkun, Kai-Wei Chang, Aram Galstyan, Jwala Dhamala, Yang Trista Cao
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
80–92
URL:
https://aclanthology.org/2022.trustnlp-1.8
DOI:
10.18653/v1/2022.trustnlp-1.8
Cite (ACL):
Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen Fraser. 2022. Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 80–92, Seattle, U.S.A. Association for Computational Linguistics.
Cite (Informal):
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models (Balkir et al., TrustNLP 2022)
PDF:
https://aclanthology.org/2022.trustnlp-1.8.pdf