Attributing Fair Decisions with Attention Interventions

Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan


Abstract
The widespread use of Artificial Intelligence (AI) in consequential domains, such as health-care and parole decision-making systems, has drawn intense scrutiny on the fairness of these methods. However, ensuring fairness is often insufficient as the rationale for a contentious decision needs to be audited, understood, and defended. We propose that the attention mechanism can be used to ensure fair outcomes while simultaneously providing feature attributions to account for how a decision was made. Toward this goal, we design an attention-based model that can be leveraged as an attribution framework. It can identify features responsible for both performance and fairness of the model through attention interventions and attention weight manipulation. Using this attribution framework, we then design a post-processing bias mitigation strategy and compare it with a suite of baselines. We demonstrate the versatility of our approach by conducting experiments on two distinct data types, tabular and textual.
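The sketch below is a minimal, self-contained illustration (NumPy only) of the general idea the abstract describes: attribute fairness and performance to individual input features by intervening on their attention weights, i.e., zeroing a feature's attention, renormalizing, and measuring how a group-fairness metric and accuracy change. The toy data, the fixed random attention model, and all function names are illustrative assumptions, not the authors' implementation; see the linked ninarehm/attribution repository for the actual code.

# Minimal sketch of attribution via attention interventions (assumed setup, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular data: n samples, d features, binary label y, binary protected attribute a.
n, d = 2000, 6
X = rng.normal(size=(n, d))
a = (rng.random(n) < 0.5).astype(int)
X[:, 0] += 1.5 * a                      # feature 0 is correlated with the protected group
y = (X[:, 1] + X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

# A fixed "attention" layer over features (per-sample weights summing to 1)
# followed by a linear prediction head. In the paper this would be a trained
# attention-based model; here the weights are random but fixed for illustration.
W_attn = rng.normal(size=d)
w_out = rng.normal(size=d)

def predict(X, mask=None):
    """Predict binary labels, optionally zeroing attention on the masked feature."""
    logits = X * W_attn                                   # per-feature attention scores
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    if mask is not None:
        attn[:, mask] = 0.0                               # intervention: drop this feature
        attn /= attn.sum(axis=1, keepdims=True)           # renormalize remaining weights
    scores = (attn * X) @ w_out                           # attended features -> score
    return (scores > 0).astype(int)

def demographic_parity_gap(y_hat, a):
    """|P(y_hat = 1 | a = 1) - P(y_hat = 1 | a = 0)|."""
    return abs(y_hat[a == 1].mean() - y_hat[a == 0].mean())

base_pred = predict(X)
base_acc = (base_pred == y).mean()
base_gap = demographic_parity_gap(base_pred, a)

# Attribution by intervention: zero each feature's attention in turn and record
# the resulting change in fairness (parity gap) and performance (accuracy).
for j in range(d):
    pred_j = predict(X, mask=j)
    d_gap = demographic_parity_gap(pred_j, a) - base_gap
    d_acc = (pred_j == y).mean() - base_acc
    print(f"feature {j}: change in parity gap = {d_gap:+.3f}, change in accuracy = {d_acc:+.3f}")

Under this reading, features whose removal shrinks the parity gap at little cost in accuracy are natural candidates for the post-processing mitigation the abstract mentions, i.e., keeping their attention weights at zero at inference time.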
Anthology ID:
2022.trustnlp-1.2
Volume:
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
Month:
July
Year:
2022
Address:
Seattle, U.S.A.
Editors:
Apurv Verma, Yada Pruksachatkun, Kai-Wei Chang, Aram Galstyan, Jwala Dhamala, Yang Trista Cao
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
12–25
URL:
https://aclanthology.org/2022.trustnlp-1.2
DOI:
10.18653/v1/2022.trustnlp-1.2
Cite (ACL):
Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, and Aram Galstyan. 2022. Attributing Fair Decisions with Attention Interventions. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 12–25, Seattle, U.S.A. Association for Computational Linguistics.
Cite (Informal):
Attributing Fair Decisions with Attention Interventions (Mehrabi et al., TrustNLP 2022)
PDF:
https://aclanthology.org/2022.trustnlp-1.2.pdf
Video:
https://aclanthology.org/2022.trustnlp-1.2.mp4
Code:
ninarehm/attribution