Erin Grant
2018
Exploiting Attention to Reveal Shortcomings in Memory Models
Kaylee Burns | Aida Nematzadeh | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands a system that is interpretable. We analyze the attention of a memory network model to reconcile contradictory performance on a challenging question-answering dataset that is inspired by theory-of-mind experiments. We equate success on questions to task classification, which explains not only test-time failures but also how well the model generalizes to new training conditions.
Evaluating Theory of Mind in Question Answering
Aida Nematzadeh | Kaylee Burns | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
We propose a new dataset for evaluating question-answering models with respect to their capacity to reason about beliefs. Our tasks are inspired by theory-of-mind experiments that examine whether children are able to reason about the beliefs of others, in particular when those beliefs differ from reality. We evaluate a number of recent neural models with memory augmentation. We find that all fail on our tasks, which require keeping track of inconsistent states of the world; moreover, the models' accuracy decreases notably when random sentences are introduced to the tasks at test time.
2015
A Computational Cognitive Model of Novel Word Generalization
Aida Nematzadeh | Erin Grant | Suzanne Stevenson
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing