Flexible Visual Grounding

Yongmin Kim, Chenhui Chu, Sadao Kurohashi


Abstract
Existing visual grounding datasets are artificially constructed, so that every query regarding an entity can always be grounded to a corresponding image region, i.e., is answerable. However, in real-world multimedia data such as news articles and social media, many entities in the text cannot be grounded to the image, i.e., are unanswerable, because the text does not necessarily directly describe the accompanying image. A robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding. To study this flexible visual grounding problem, we construct a pseudo dataset and a social media dataset including both answerable and unanswerable queries. In order to handle unanswerable visual grounding, we propose a novel method that adds a pseudo image region corresponding to a query that cannot be grounded. The model is then trained to ground answerable queries to their ground-truth regions and unanswerable queries to the pseudo region. In our experiments, we show that our model can flexibly process both answerable and unanswerable queries with high accuracy on our datasets.
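The core idea of the abstract can be illustrated with a minimal sketch: a pseudo "no-region" candidate is appended to the real candidate regions, and selecting it at inference time signals that the query is unanswerable. The function below is illustrative only, with invented names and scores; it is not the authors' implementation.

```python
def ground(region_scores, pseudo_region_score):
    """Select a region for a query, or return None if the query is
    unanswerable, i.e. the appended pseudo region scores highest.

    region_scores: grounding scores for the real candidate regions.
    pseudo_region_score: score of the learned pseudo (no-region) candidate.
    """
    # Append the pseudo region as one extra candidate.
    scores = list(region_scores) + [pseudo_region_score]
    best = max(range(len(scores)), key=lambda i: scores[i])
    # Choosing the last (pseudo) index means "unanswerable".
    if best == len(scores) - 1:
        return None
    return best

# Answerable query: a real region outscores the pseudo candidate.
print(ground([0.2, 0.9, 0.1], 0.3))  # -> 1
# Unanswerable query: the pseudo candidate wins.
print(ground([0.2, 0.1], 0.5))       # -> None
```

During training, the same candidate list would be supervised with the ground-truth region index for answerable queries and the pseudo-region index for unanswerable ones.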
Anthology ID:
2022.acl-srw.22
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Samuel Louvan, Andrea Madotto, Brielen Madureira
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
285–299
URL:
https://aclanthology.org/2022.acl-srw.22
DOI:
10.18653/v1/2022.acl-srw.22
Cite (ACL):
Yongmin Kim, Chenhui Chu, and Sadao Kurohashi. 2022. Flexible Visual Grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 285–299, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Flexible Visual Grounding (Kim et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-srw.22.pdf
Code
 ku-nlp/smd4fvg
Data
ImageNet, MS COCO, RefCOCO, Visual Genome, Visual7W