Focus! Relevant and Sufficient Context Selection for News Image Captioning

Mingyang Zhou, Grace Luo, Anna Rohrbach, Zhou Yu


Abstract
News Image Captioning requires describing an image by leveraging additional context derived from a news article. Previous works only coarsely leverage the article to extract the necessary context, which makes it challenging for models to identify relevant events and named entities. In our paper, we first demonstrate that by combining more fine-grained context that captures the key named entities (obtained via an oracle) and the global context that summarizes the news, we can dramatically improve the model’s ability to generate accurate news captions. This raises the question: how can such key entities be extracted automatically given the image? We propose to use the pre-trained vision-and-language retrieval model CLIP to localize the visually grounded entities in the news article, and then capture the non-visual entities via an open relation extraction model. Our experiments demonstrate that by simply selecting better context from the article, we can significantly improve the performance of existing models and achieve new state-of-the-art performance on multiple benchmarks.
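As a rough illustration of the selection idea described in the abstract (not the authors' actual implementation): once an image-text model such as CLIP assigns each candidate named entity a similarity score against the image, context selection reduces to ranking the candidates and keeping the top-k. A minimal sketch, with a hypothetical stand-in score table in place of a real CLIP model:

```python
def select_entities(entities, similarity, k=3):
    """Rank candidate named entities by image-text similarity and keep the top-k.

    `similarity` stands in for a real image-text score (e.g., CLIP cosine
    similarity between the image embedding and each entity's text embedding);
    here it is a hypothetical precomputed lookup.
    """
    ranked = sorted(entities, key=lambda e: similarity[e], reverse=True)
    return ranked[:k]

# Hypothetical scores an image-text model might assign to article entities.
scores = {"Angela Merkel": 0.31, "Berlin": 0.22, "Bundestag": 0.12, "2017": 0.05}
print(select_entities(scores.keys(), scores, k=2))
# -> ['Angela Merkel', 'Berlin']
```

In the paper's setting these visually grounded entities are then combined with non-visual entities (from open relation extraction) and a summary of the article to form the captioning context.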
Anthology ID:
2022.findings-emnlp.450
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6078–6088
URL:
https://aclanthology.org/2022.findings-emnlp.450
DOI:
10.18653/v1/2022.findings-emnlp.450
Cite (ACL):
Mingyang Zhou, Grace Luo, Anna Rohrbach, and Zhou Yu. 2022. Focus! Relevant and Sufficient Context Selection for News Image Captioning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6078–6088, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Focus! Relevant and Sufficient Context Selection for News Image Captioning (Zhou et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.450.pdf