Visual Interrogation of Attention-Based Models for Natural Language Inference and Machine Comprehension

Shusen Liu, Tao Li, Zhimin Li, Vivek Srikumar, Valerio Pascucci, Peer-Timo Bremer


Abstract
Neural network models have gained unprecedented popularity in natural language processing due to their state-of-the-art performance and their flexible end-to-end training scheme. Despite these advantages, their lack of interpretability hinders the deployment and refinement of the models. In this work, we present a flexible visualization library for creating customized visual analytic environments, in which the user can investigate and interrogate the relationships among the input, the model internals (i.e., attention), and the output predictions, which in turn shed light on the model's decision-making process.
Anthology ID:
D18-2007
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Month:
November
Year:
2018
Address:
Brussels, Belgium
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
36–41
URL:
https://aclanthology.org/D18-2007
DOI:
10.18653/v1/D18-2007
Cite (ACL):
Shusen Liu, Tao Li, Zhimin Li, Vivek Srikumar, Valerio Pascucci, and Peer-Timo Bremer. 2018. Visual Interrogation of Attention-Based Models for Natural Language Inference and Machine Comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 36–41, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Visual Interrogation of Attention-Based Models for Natural Language Inference and Machine Comprehension (Liu et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-2007.pdf