Informative Image Captioning with External Sources of Information

Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, Radu Soricut


Abstract
An image caption should fluently present the essential information in a given image, including informative, fine-grained entity mentions and the manner in which these entities interact. However, current captioning models are usually trained to generate captions that only contain common object names, thus falling short on an important “informativeness” dimension. We present a mechanism for integrating image information together with fine-grained labels (assumed to be generated by some upstream models) into a caption that describes the image in a fluent and informative manner. We introduce a multimodal, multi-encoder model based on Transformer that ingests both image features and multiple sources of entity labels. We demonstrate that we can learn to control the appearance of these entity labels in the output, resulting in captions that are both fluent and informative.
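The abstract describes a multi-encoder model in which separate sources (image features, entity labels) each feed the decoder. As a minimal sketch of that idea, the toy code below attends over each source's memory separately with scaled dot-product attention and then averages the per-source contexts; this fusion strategy and all function names are illustrative assumptions, not the paper's actual architecture.

```python
import math

def attend(query, memory):
    """Scaled dot-product attention of one query vector over a memory
    (a list of key/value vectors of the same dimension)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, vec)) / math.sqrt(d)
              for vec in memory]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # softmax, numerically stable
    z = sum(exps)
    weights = [e / z for e in exps]
    # Context = attention-weighted sum of the memory vectors.
    return [sum(w * vec[i] for w, vec in zip(weights, memory))
            for i in range(d)]

def multi_encoder_attend(query, memories):
    """Fuse several encoder memories (e.g. image features and entity-label
    embeddings) by attending over each source and averaging the contexts.
    Averaging is one simple choice; other fusion schemes are possible."""
    contexts = [attend(query, mem) for mem in memories]
    d = len(query)
    return [sum(c[i] for c in contexts) / len(contexts) for i in range(d)]
```

With a single-vector memory, `attend` simply returns that vector; with two sources whose contexts are `[2, 0]` and `[0, 2]`, the fused context is their mean `[1, 1]`.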
Anthology ID:
P19-1650
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6485–6494
URL:
https://aclanthology.org/P19-1650
DOI:
10.18653/v1/P19-1650
Cite (ACL):
Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, and Radu Soricut. 2019. Informative Image Captioning with External Sources of Information. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6485–6494, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Informative Image Captioning with External Sources of Information (Zhao et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1650.pdf
Data
Conceptual Captions