Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization

Cennet Oguz, Pascal Denis, Emmanuel Vincent, Simon Ostermann, Josef van Genabith


Abstract
In multimodal understanding tasks, both visual and linguistic ambiguities can arise. Visual ambiguity occurs when a model must ground a referring expression in a video without strong supervision, while linguistic ambiguity arises from changes to entities in action flows. For example, in the cooking domain, “oil” mixed with “salt” and “pepper” may later be referred to as a “mixture”. Without a clear visual-linguistic alignment, we cannot tell which of several objects shown is referred to by the expression “mixture”, and without resolved antecedents, we cannot pinpoint what the mixture is. We define this chicken-and-egg problem as visual-linguistic ambiguity. In this paper, we present Find2Find, a joint anaphora resolution and object localization dataset targeting the problem of visual-linguistic ambiguity, consisting of 500 anaphora-annotated recipes with corresponding videos. We present experimental results for a novel end-to-end joint multitask learning framework for Find2Find that fuses visual and textual information, and we show improvements on both anaphora resolution and object localization with a single joint multitask model, as compared to a strong single-task baseline.
Anthology ID:
2023.emnlp-main.504
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8099–8110
URL:
https://aclanthology.org/2023.emnlp-main.504
DOI:
10.18653/v1/2023.emnlp-main.504
Cite (ACL):
Cennet Oguz, Pascal Denis, Emmanuel Vincent, Simon Ostermann, and Josef van Genabith. 2023. Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8099–8110, Singapore. Association for Computational Linguistics.
Cite (Informal):
Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization (Oguz et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.504.pdf
Video:
https://aclanthology.org/2023.emnlp-main.504.mp4