Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

Jaap Jumelet, Dieuwke Hupkes


Abstract
In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon widely discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model can correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item, and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
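
To make the phenomenon concrete: a negative polarity item (NPI) such as "any" or "ever" is acceptable only within the scope of a licensing context, typically negation ("no student has any books" vs. "*some student has any books"). Below is a minimal sketch of the kind of probe the abstract describes, not the authors' exact setup: it compares the log-probability a language model assigns to an NPI in a licensed context against a minimally different unlicensed one. The helper log_p_next and the toy stand-in scorer are hypothetical placeholders for a real LSTM language model.

import math
import random
from typing import Callable, List

def npi_licensing_gap(log_p_next: Callable[[List[str], str], float]) -> float:
    """Log-probability gap for the NPI 'any' between a context licensed
    by negation and a minimally different unlicensed one. A positive gap
    means the model prefers the NPI where it is licensed."""
    licensed = ["no", "student", "has"]      # 'no' licenses the NPI
    unlicensed = ["some", "student", "has"]  # no licensor present
    return log_p_next(licensed, "any") - log_p_next(unlicensed, "any")

# Toy stand-in so the sketch runs end to end; in practice, substitute the
# conditional log-probability log P(word | prefix) from a trained LSTM LM.
def toy_log_p_next(prefix: List[str], word: str) -> float:
    random.seed(" ".join(prefix + [word]))
    return math.log(random.uniform(0.01, 0.5))

if __name__ == "__main__":
    print(f"licensing gap: {npi_licensing_gap(toy_log_p_next):+.3f}")

With a trained language model substituted for the toy scorer, a consistently positive gap across many such minimal pairs would indicate sensitivity to the licensing context.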
Anthology ID: W18-5424
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 222–231
URL: https://aclanthology.org/W18-5424
DOI: 10.18653/v1/W18-5424
Cite (ACL): Jaap Jumelet and Dieuwke Hupkes. 2018. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222–231, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items (Jumelet & Hupkes, EMNLP 2018)
PDF: https://aclanthology.org/W18-5424.pdf