Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

Atticus Geiger, Kyle Richardson, Christopher Potts


Abstract
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion, and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.
Anthology ID:
2020.blackboxnlp-1.16
Volume:
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2020
Address:
Online
Venues:
BlackboxNLP | EMNLP
Publisher:
Association for Computational Linguistics
Pages:
163–173
URL:
https://aclanthology.org/2020.blackboxnlp-1.16
DOI:
10.18653/v1/2020.blackboxnlp-1.16
Cite (ACL):
Atticus Geiger, Kyle Richardson, and Christopher Potts. 2020. Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 163–173, Online. Association for Computational Linguistics.
Cite (Informal):
Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation (Geiger et al., BlackboxNLP 2020)
PDF:
https://aclanthology.org/2020.blackboxnlp-1.16.pdf
Optional supplementary material:
 2020.blackboxnlp-1.16.OptionalSupplementaryMaterial.zip
Code
 atticusg/MoNLI
Data
HELP | SNLI