Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora

Robert Frank, Jackson Petty


Abstract
Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora that generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success, as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training), and in what contexts a noun phrase must occur to support generalization of reflexive interpretation to that noun phrase.
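As a concrete illustration of the interpretation task the abstract describes, the sketch below maps sentences from a toy English fragment to logical forms, where a reflexive object must be resolved to its subject antecedent. The lexicon, sentence pattern, and output format here are illustrative assumptions, not the paper's actual dataset (which is released via clay-lab/transductions).

```python
# Toy illustration (NOT the paper's dataset): sentences from a small English
# fragment are mapped to logical forms. The reflexive has no fixed referent;
# its interpretation copies the subject's, which is the context-dependence
# that the sequence-to-sequence networks must learn to generalize.
LEXICON = {
    "alice": "ALICE", "bob": "BOB", "claire": "CLAIRE",
    "sees": "SEE", "admires": "ADMIRE",
}
REFLEXIVES = {"herself", "himself"}

def interpret(sentence: str) -> str:
    """Map 'Alice sees Bob' -> 'SEE(ALICE, BOB)' and
    'Alice sees herself' -> 'SEE(ALICE, ALICE)'."""
    subj, verb, obj = sentence.lower().split()
    subj_sym = LEXICON[subj]
    # A reflexive object is interpreted as identical to the subject.
    obj_sym = subj_sym if obj in REFLEXIVES else LEXICON[obj]
    return f"{LEXICON[verb]}({subj_sym}, {obj_sym})"
```

Generalization to a novel antecedent then amounts to producing, e.g., `ADMIRE(CLAIRE, CLAIRE)` for "Claire admires herself" even when "claire" never appeared as a reflexive antecedent in training.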
Anthology ID:
2020.crac-1.16
Volume:
Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference
Month:
December
Year:
2020
Address:
Barcelona, Spain (online)
Venues:
COLING | CRAC
Publisher:
Association for Computational Linguistics
Pages:
154–164
URL:
https://aclanthology.org/2020.crac-1.16
Cite (ACL):
Robert Frank and Jackson Petty. 2020. Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 154–164, Barcelona, Spain (online). Association for Computational Linguistics.
Cite (Informal):
Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora (Frank & Petty, CRAC 2020)
PDF:
https://aclanthology.org/2020.crac-1.16.pdf
Code:
clay-lab/transductions
Data:
SCAN