Limitations in learning an interpreted language with recurrent models

Denis Paperno
Abstract
In this submission I report work in progress on learning simplified interpreted languages by means of recurrent models. The data is constructed to reflect core properties of natural language as modeled in formal syntax and semantics. Preliminary results suggest that LSTM networks do generalise to compositional interpretation, albeit only in the most favorable learning setting.
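To make the setup concrete, here is a minimal, purely illustrative sketch of what a "simplified interpreted language" dataset of the kind the abstract describes might look like. The entity names, relation names, and generator below are assumptions for illustration only, not the actual data used in the paper: each expression is a string built by composing relation words, and its interpretation is obtained by function composition over a small domain.

```python
import random

# Hypothetical toy interpreted language (illustrative assumption,
# not the paper's actual dataset): a small domain of entities and
# relation words interpreted as functions from entities to entities.
ENTITIES = ["ann", "bob", "cal", "dee"]

RELATIONS = {
    "friend": {"ann": "bob", "bob": "cal", "cal": "dee", "dee": "ann"},
    "enemy":  {"ann": "cal", "bob": "dee", "cal": "ann", "dee": "bob"},
}

def sample_expression(depth, rng):
    """Build a phrase like 'enemy of friend of ann' together with its
    compositional interpretation (the entity it refers to)."""
    entity = rng.choice(ENTITIES)
    phrase, referent = entity, entity
    for _ in range(depth):
        rel = rng.choice(sorted(RELATIONS))
        phrase = f"{rel} of {phrase}"          # extend the string form
        referent = RELATIONS[rel][referent]    # apply the relation's function
    return phrase, referent

rng = random.Random(0)
pairs = [sample_expression(depth=2, rng=rng) for _ in range(5)]
```

A sequence model (e.g. an LSTM) would then be trained on such (phrase, referent) pairs and tested on whether it generalises the compositional rule to unseen combinations or greater depths.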
Anthology ID: W18-5456
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 384–386
URL: https://aclanthology.org/W18-5456
DOI: 10.18653/v1/W18-5456
Cite (ACL): Denis Paperno. 2018. Limitations in learning an interpreted language with recurrent models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 384–386, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Limitations in learning an interpreted language with recurrent models (Paperno, EMNLP 2018)
PDF: https://aclanthology.org/W18-5456.pdf