Interpreting Text Classifiers by Learning Context-sensitive Influence of Words

Sawan Kumar, Kalpit Dixit, Kashif Shah


Abstract
Many existing approaches to interpreting text classification models provide importance scores for parts of the input text, such as words, but offer no way to test or improve the interpretation method itself. This compounds the problem of understanding or building trust in the model, since the interpretation method adds its own layer of opacity. Further, importance scores on individual examples are usually not enough to give a sufficient picture of model behavior. To address these concerns, we propose MOXIE (MOdeling conteXt-sensitive InfluencE of words), which aims to give users a richer interface for interacting with the model being interpreted and to produce testable predictions. In particular, MOXIE aims to predict importance scores, counterfactuals, and learned biases. In addition, with a global learning objective, MOXIE provides a clear path for testing and improving itself. We evaluate the reliability and efficiency of MOXIE on the task of sentiment analysis.
Anthology ID:
2021.trustnlp-1.7
Volume:
Proceedings of the First Workshop on Trustworthy Natural Language Processing
Month:
June
Year:
2021
Address:
Online
Editors:
Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
55–67
URL:
https://aclanthology.org/2021.trustnlp-1.7
DOI:
10.18653/v1/2021.trustnlp-1.7
Cite (ACL):
Sawan Kumar, Kalpit Dixit, and Kashif Shah. 2021. Interpreting Text Classifiers by Learning Context-sensitive Influence of Words. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 55–67, Online. Association for Computational Linguistics.
Cite (Informal):
Interpreting Text Classifiers by Learning Context-sensitive Influence of Words (Kumar et al., TrustNLP 2021)
PDF:
https://aclanthology.org/2021.trustnlp-1.7.pdf
Data
GLUE, SST, SST-2