Know your audience: specializing grounded language models with listener subtraction

Aaditya K Singh, David Ding, Andrew Saxe, Felix Hill, Andrew Lampinen


Abstract
Effective communication requires adapting to the idiosyncrasies of each communicative context—such as the common ground shared with each partner. Humans demonstrate this ability to specialize to their audience in many contexts, such as the popular game Dixit. We take inspiration from Dixit to formulate a multi-agent image reference game where a (trained) speaker model is rewarded for describing a target image such that one (pretrained) listener model can correctly identify it among distractors, but another listener cannot. To adapt, the speaker must exploit differences in the knowledge it shares with the different listeners. We show that finetuning an attention-based adapter between a CLIP vision encoder and a large language model in this contrastive, multi-agent setting gives rise to context-dependent natural language specialization from rewards only, without direct supervision. Through controlled experiments, we show that training a speaker with two listeners that perceive differently, using our method, allows the speaker to adapt to the idiosyncrasies of the listeners. Furthermore, we show zero-shot transfer of the specialization to real-world data. Our experiments demonstrate a method for specializing grounded language models without direct supervision and highlight the interesting research challenges posed by complex multi-agent communication.
Anthology ID:
2023.eacl-main.279
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3884–3911
URL:
https://aclanthology.org/2023.eacl-main.279
DOI:
10.18653/v1/2023.eacl-main.279
Cite (ACL):
Aaditya K Singh, David Ding, Andrew Saxe, Felix Hill, and Andrew Lampinen. 2023. Know your audience: specializing grounded language models with listener subtraction. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3884–3911, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Know your audience: specializing grounded language models with listener subtraction (Singh et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.279.pdf
Video:
https://aclanthology.org/2023.eacl-main.279.mp4