Deriving continous grounded meaning representations from referentially structured multimodal contexts

Sina Zarrieß, David Schlangen


Abstract
Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations. Here, we explore additional ways of extracting from them word representations linked to multi-modal context: through expressions that refer to the same object, and through expressions that refer to different objects in the same scene. We show that continuous meaning representations derived from these contexts capture complementary aspects of similarity, even if not outperforming textual embeddings trained on very large amounts of raw text when tested on standard similarity benchmarks. We propose a new task for evaluating grounded meaning representations—detection of potentially co-referential phrases—and show that it requires precise denotational representations of attribute meanings, which our method provides.
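
The abstract names two referentially structured context types. As an illustration only, the following Python sketch shows how the first of these, co-referential ("same object") contexts, might be collected from a toy corpus; the data, function names, and representation choice are hypothetical and not the authors' implementation.

```python
# Minimal sketch (not the paper's code): represent a word by the words that
# occur in *other* expressions referring to the same object. All data and
# names below are hypothetical illustrations.
from collections import Counter, defaultdict

# Toy records: (referring expression, object id)
records = [
    ("the red ball", "obj1"),
    ("a small red toy", "obj1"),
    ("the blue box", "obj2"),
    ("the box on the left", "obj2"),
]

def same_object_contexts(records):
    """Count, for each word, the words used in other expressions for the same referent."""
    by_object = defaultdict(list)
    for expr, obj in records:
        by_object[obj].append(expr.lower().split())
    contexts = defaultdict(Counter)
    for expressions in by_object.values():
        for i, expr in enumerate(expressions):
            for j, other in enumerate(expressions):
                if i == j:
                    continue  # only cross-expression (co-referential) pairs count
                for word in expr:
                    contexts[word].update(other)
    return contexts

contexts = same_object_contexts(records)
print(contexts["red"])  # co-referential context counts for "red"
```

Such count vectors would still need to be mapped to continuous representations (for instance via PMI weighting and dimensionality reduction) before they could be compared against textual embeddings on similarity benchmarks.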
Anthology ID:
D17-1100
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
959–965
URL:
https://aclanthology.org/D17-1100
DOI:
10.18653/v1/D17-1100
Cite (ACL):
Sina Zarrieß and David Schlangen. 2017. Deriving continous grounded meaning representations from referentially structured multimodal contexts. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 959–965, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Deriving continous grounded meaning representations from referentially structured multimodal contexts (Zarrieß & Schlangen, EMNLP 2017)
PDF:
https://aclanthology.org/D17-1100.pdf