Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality

Yichen Jiang, Xiang Zhou, Mohit Bansal


Abstract
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models. In this work, we analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias (i.e., a target sequence can only be mapped to one source sequence), and the tendency to memorize whole examples rather than separating structures from contents. We propose two techniques to address these issues respectively: Mutual Exclusivity Training, which uses an unlikelihood-based loss to prevent the model from producing previously seen generations when facing novel examples, and prim2primX data augmentation, which automatically diversifies the arguments of every syntactic function to prevent memorization and provide a compositional inductive bias without exposing test-set data. Combining these two techniques, we show substantial empirical improvements with standard sequence-to-sequence models (LSTMs and Transformers) on two widely used compositionality datasets: SCAN and COGS. Finally, we provide analysis characterizing the improvements as well as the remaining challenges, along with detailed ablations of our method.
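To make the unlikelihood-based Mutual Exclusivity Training concrete, here is a minimal PyTorch sketch (not the authors' released code): given decoder logits for a novel source, it penalizes probability mass placed on the tokens of a previously seen target sequence. The function name `me_unlikelihood_loss`, the padding convention, and the clamping constant are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def me_unlikelihood_loss(logits, neg_targets, pad_id=0):
    """Penalize probability mass on a seen generation for a novel input.

    logits:      (batch, seq_len, vocab) decoder scores
    neg_targets: (batch, seq_len) token ids of the seen output to avoid
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log p(seen token) at each decoding step
    neg_logp = log_probs.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)
    # unlikelihood term: -log(1 - p(seen token)), clamped for stability
    one_minus_p = (1.0 - neg_logp.exp()).clamp(min=1e-5)
    loss = -one_minus_p.log()
    # average over non-padding positions only
    mask = (neg_targets != pad_id).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```

In training, such a term would typically be added to the standard cross-entropy loss with a scalar weight (a tunable hyperparameter), so the model is pushed toward the correct target and away from the memorized one at the same time.

Likewise, a toy illustration of prim2primX-style augmentation on a SCAN-like (command, action) pair, where each primitive is cloned into k synthetic variants so whole examples cannot be memorized. The variant-naming scheme ("jump1"/"I_JUMP1") and the primitive list are our assumptions, not the paper's specification:

```python
def prim2primx(src, tgt, primitives, k=3):
    """Yield (src, tgt) pairs with each primitive cloned into k variants."""
    for src_prim, tgt_prim in primitives:   # e.g., ("jump", "I_JUMP")
        if src_prim in src.split():         # only augment pairs using the primitive
            for i in range(1, k + 1):
                yield (src.replace(src_prim, f"{src_prim}{i}"),
                       tgt.replace(tgt_prim, f"{tgt_prim}{i}"))

# Example:
# list(prim2primx("jump twice", "I_JUMP I_JUMP", [("jump", "I_JUMP")], k=2))
# -> [("jump1 twice", "I_JUMP1 I_JUMP1"), ("jump2 twice", "I_JUMP2 I_JUMP2")]
```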
Anthology ID: 2022.emnlp-main.808
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 11778–11793
URL: https://aclanthology.org/2022.emnlp-main.808
DOI: 10.18653/v1/2022.emnlp-main.808
Cite (ACL): Yichen Jiang, Xiang Zhou, and Mohit Bansal. 2022. Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11778–11793, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality (Jiang et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.808.pdf