Grounding Language in Multi-Perspective Referential Communication

Zineng Tang, Lingjun Mao, Alane Suhr


Abstract
We introduce a task and dataset for referring expression generation and comprehension in multi-agent embodied environments. In this task, two agents in a shared scene must take into account one another's visual perspective, which may differ from their own, to both produce and understand references to objects in a scene and the spatial relations between them. We collect a dataset of 2,970 human-written referring expressions, each paired with human comprehension judgments, and evaluate the performance of automated models as speakers and listeners paired with human partners, finding that model performance in both reference generation and comprehension lags behind that of pairs of human agents. Finally, we experiment with training an open-weight speaker model on evidence of communicative success when paired with a listener, resulting in an improvement from 58.9% to 69.3% in communicative success and even outperforming the strongest proprietary model.
Anthology ID:
2024.emnlp-main.1100
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
19727–19741
URL:
https://aclanthology.org/2024.emnlp-main.1100
DOI:
10.18653/v1/2024.emnlp-main.1100
Cite (ACL):
Zineng Tang, Lingjun Mao, and Alane Suhr. 2024. Grounding Language in Multi-Perspective Referential Communication. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 19727–19741, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Grounding Language in Multi-Perspective Referential Communication (Tang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1100.pdf