Julia R.S. Bursten


2020

Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models
Christopher Grimsley | Elijah Mayfield | Julia R.S. Bursten
Proceedings of the Twelfth Language Resources and Evaluation Conference

As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state of the art in explanation of neural models for NLP tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the impossibility of causal explanations from attention layers over text data. We then introduce NLP researchers to contemporary philosophy-of-science theories that allow robust yet non-causal reasoning in explanation, giving computer scientists a vocabulary for future research.