Good-Enough Compositional Data Augmentation

Jacob Andreas


Abstract
We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models. Under this protocol, synthetic training examples are constructed by taking real training examples and replacing (possibly discontinuous) fragments with other fragments that appear in at least one similar environment. The protocol is model-agnostic and useful for a variety of tasks. Applied to neural sequence-to-sequence models, it reduces error rate by as much as 87% on diagnostic tasks from the SCAN dataset and 16% on a semantic parsing task. Applied to n-gram language models, it reduces perplexity by roughly 1% on small corpora in several languages.
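The abstract states the substitution rule but not its mechanics. The toy Python sketch below illustrates the underlying idea under strong simplifying assumptions: fragments are single tokens and an "environment" is the rest of the sentence with the fragment position replaced by a gap marker. It is not the released implementation (see jacobandreas/geca below); all function names here are invented for illustration.

```python
# Illustrative sketch of the fragment-substitution idea from the abstract
# (simplified: single-token fragments, whole-sentence environments).
from collections import defaultdict

GAP = "<gap>"

def environments(tokens):
    """Yield (fragment, environment) pairs for every single-token fragment."""
    for i, tok in enumerate(tokens):
        env = tuple(tokens[:i]) + (GAP,) + tuple(tokens[i + 1:])
        yield tok, env

def geca_like_augment(corpus):
    """Generate synthetic examples by swapping fragments that share an environment."""
    frag_to_envs = defaultdict(set)
    env_to_frags = defaultdict(set)
    for sent in corpus:
        for frag, env in environments(sent):
            frag_to_envs[frag].add(env)
            env_to_frags[env].add(frag)

    # Two fragments are treated as interchangeable if they occur in at least
    # one identical environment; interchangeable fragments are then spliced
    # into each other's remaining environments.
    synthetic = set()
    originals = {tuple(s) for s in corpus}
    for env, frags in env_to_frags.items():
        for frag in frags:
            for other_env in frag_to_envs[frag]:
                for alt in env_to_frags[other_env]:
                    new = tuple(alt if t == GAP else t for t in env)
                    if new not in originals:
                        synthetic.add(new)
    return sorted(synthetic)

if __name__ == "__main__":
    corpus = [
        "she picks the wug up".split(),
        "she puts the wug down".split(),
        "she picks the cat up".split(),
    ]
    for sent in geca_like_augment(corpus):
        print(" ".join(sent))
```

On this toy corpus, "wug" and "cat" share the environment "she picks the ___ up", so the sketch emits the new example "she puts the cat down". The paper's actual protocol additionally handles multi-token and discontinuous fragments and applies the same idea to sequence-to-sequence pairs.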
Anthology ID:
2020.acl-main.676
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7556–7566
URL:
https://aclanthology.org/2020.acl-main.676
DOI:
10.18653/v1/2020.acl-main.676
Cite (ACL):
Jacob Andreas. 2020. Good-Enough Compositional Data Augmentation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7556–7566, Online. Association for Computational Linguistics.
Cite (Informal):
Good-Enough Compositional Data Augmentation (Andreas, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.676.pdf
Video:
http://slideslive.com/38928687
Code
jacobandreas/geca
Data
SCAN