Continual Adaptation for Efficient Machine Communication

Robert Hawkins, Minae Kwon, Dorsa Sadigh, Noah Goodman


Abstract
To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent neural language models are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do. We introduce an interactive repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently communicate with a partner over time. We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.
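The paper spells out the full objective; as a rough illustration of the regularized continual learning idea described in the abstract, the sketch below fine-tunes a speaker model on each round of a reference game while a KL penalty keeps it close to the frozen generic model and a small rehearsal buffer replays an earlier round. All names here (adapt_to_partner, kl_weight, the model interface that maps input ids to vocabulary logits) are hypothetical illustrations, not the interface of the paper's released code.

```python
import torch
import torch.nn.functional as F

def adapt_to_partner(model, base_model, rounds, kl_weight=0.5, lr=1e-4, steps=8):
    """Continually adapt `model` to one partner's conventions.

    `rounds` yields (input_ids, target_ids) pairs for each reference-game
    round; `model(input_ids)` is assumed to return logits over the vocabulary.
    The loss is the usual cross-entropy on the new round plus a KL penalty
    toward the frozen `base_model`, which discourages drifting away from the
    generic language model while adapting to this partner.
    """
    base_model.eval()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    seen = []  # rehearsal buffer of rounds observed so far
    for round_data in rounds:
        seen.append(round_data)
        for _ in range(steps):
            # Replay the current round together with the previous one.
            batch = seen[-2:]
            optimizer.zero_grad()
            loss = 0.0
            for input_ids, target_ids in batch:
                logits = model(input_ids)
                nll = F.cross_entropy(logits, target_ids)
                with torch.no_grad():
                    base_logits = base_model(input_ids)
                # KL(model || base): log-probs for the input, probs for target.
                kl = F.kl_div(
                    F.log_softmax(logits, dim=-1),
                    F.softmax(base_logits, dim=-1),
                    reduction="batchmean",
                )
                loss = loss + nll + kl_weight * kl
            loss.backward()
            optimizer.step()
    return model
```

In this sketch each new partner would start from a fresh copy of the generic model (e.g. copy.deepcopy(base_model)), so that adaptation to one partner never contaminates the model used for the next.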
Anthology ID: 2020.conll-1.33
Volume: Proceedings of the 24th Conference on Computational Natural Language Learning
Month: November
Year: 2020
Address: Online
Editors: Raquel Fernández, Tal Linzen
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 408–419
URL: https://aclanthology.org/2020.conll-1.33
DOI: 10.18653/v1/2020.conll-1.33
Cite (ACL): Robert Hawkins, Minae Kwon, Dorsa Sadigh, and Noah Goodman. 2020. Continual Adaptation for Efficient Machine Communication. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 408–419, Online. Association for Computational Linguistics.
Cite (Informal): Continual Adaptation for Efficient Machine Communication (Hawkins et al., CoNLL 2020)
PDF: https://aclanthology.org/2020.conll-1.33.pdf
Optional supplementary material: 2020.conll-1.33.OptionalSupplementaryMaterial.pdf
Code: hawkrobe/continual-adaptation
Data: MS COCO