Training for Diversity in Image Paragraph Captioning

Luke Melas-Kyriazi, Alexander Rush, George Han


Abstract
Image paragraph captioning models aim to produce detailed, multi-sentence descriptions of a source image. These models use techniques similar to those of standard image captioning models, but they have encountered issues in text generation, most notably a lack of diversity between sentences, that limit their effectiveness. In this work, we consider applying sequence-level training to this task. We find that standard self-critical training produces poor results, but that, when combined with an integrated penalty on trigram repetition, it produces much more diverse paragraphs. This simple training approach improves the best result on the Visual Genome paragraph captioning dataset from 16.9 to 30.6 CIDEr, with gains on METEOR and BLEU as well, without requiring any architectural changes.
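The abstract names two ingredients but does not show them; the sketch below (PyTorch-style, not the authors' released code) illustrates one plausible reading of each: a penalty on repeated trigrams applied while sampling, and a standard self-critical policy-gradient loss. The function names, the fixed subtractive penalty, and applying the penalty at the logit level are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch

def penalize_repeated_trigrams(logits, generated, penalty=2.0):
    """Down-weight next tokens that would complete a trigram already
    present in the partially generated sequence.

    logits:    (batch, vocab) scores for the next token
    generated: (batch, t) token ids sampled so far
    penalty:   amount subtracted from offending logits (assumed knob;
               the paper's exact penalty formulation may differ)
    """
    batch, t = generated.shape
    if t < 2:
        return logits
    for b in range(batch):
        seq = generated[b].tolist()
        # Index every trigram seen so far by its leading bigram.
        seen = {}
        for i in range(t - 2):
            seen.setdefault((seq[i], seq[i + 1]), set()).add(seq[i + 2])
        # Penalize any token that would repeat a trigram given the
        # bigram currently at the end of the sequence.
        for tok in seen.get((seq[-2], seq[-1]), ()):
            logits[b, tok] -= penalty
    return logits

def self_critical_loss(sample_logprobs, sample_reward, greedy_reward):
    """Standard self-critical (SCST) loss: the reward of the greedy
    decode serves as a baseline, so sampled captions scoring above
    the greedy baseline are reinforced.

    sample_logprobs: (batch,) summed log-probs of the sampled captions
    sample_reward:   (batch,) e.g. CIDEr of the sampled captions
    greedy_reward:   (batch,) e.g. CIDEr of the greedy captions
    """
    advantage = sample_reward - greedy_reward
    return -(advantage * sample_logprobs).mean()
```

In this reading, the trigram penalty is applied inside the sampling loop before each softmax, so the diversity pressure shapes the sequences being rewarded during training rather than being applied only at decoding time.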
Anthology ID:
D18-1084
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
757–761
URL:
https://aclanthology.org/D18-1084
DOI:
10.18653/v1/D18-1084
Cite (ACL):
Luke Melas-Kyriazi, Alexander Rush, and George Han. 2018. Training for Diversity in Image Paragraph Captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 757–761, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Training for Diversity in Image Paragraph Captioning (Melas-Kyriazi et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1084.pdf
Data
Image Paragraph Captioning, Visual Genome