Compositional Generalization by Factorizing Alignment and Translation

Jacob Russin, Jason Jo, Randall O’Reilly, Yoshua Bengio


Abstract
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in cognitive science suggesting a functional distinction between systems for syntactic and semantic processing, we implement a modification to an existing approach in neural machine translation, imposing an analogous separation between alignment and translation. The resulting architecture substantially outperforms standard recurrent networks on the SCAN dataset, a compositional generalization task, without any additional supervision. Our work suggests that learning to align and to translate in separate modules may be a useful heuristic for capturing compositional structure.
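The core architectural idea described in the abstract can be sketched as an attention mechanism in which the alignment weights are computed from one representation stream while the attended content comes from a separate stream. The sketch below is illustrative only (function names, shapes, and the dot-product scoring are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorized_attention(decoder_state, alignment_keys, translation_values):
    """Attention with alignment and translation factorized:
    weights are computed only from `alignment_keys` (a 'syntactic' stream),
    while the returned context mixes only `translation_values`
    (a 'semantic' stream)."""
    scores = alignment_keys @ decoder_state        # one score per source token
    weights = softmax(scores)                      # alignment distribution
    context = weights @ translation_values         # attended semantic content
    return context, weights

# Toy example: 4 source tokens, key dim 3, value dim 5 (arbitrary shapes).
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 3))
values = rng.normal(size=(4, 5))
state = rng.normal(size=3)

context, weights = factorized_attention(state, keys, values)
assert np.isclose(weights.sum(), 1.0)   # weights form a distribution
assert context.shape == (5,)            # context lives in the value space
```

Because the weights never see the semantic values, swapping a word's semantics (e.g. a novel word in a known grammatical slot) leaves the learned alignment behavior unchanged, which is the intuition behind the systematic-generalization claim.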
Anthology ID:
2020.acl-srw.42
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
313–327
URL:
https://aclanthology.org/2020.acl-srw.42
DOI:
10.18653/v1/2020.acl-srw.42
PDF:
https://aclanthology.org/2020.acl-srw.42.pdf
Video:
http://slideslive.com/38928642