Christine Jou


2024

Altogether: Image Captioning via Re-aligning Alt-text
Hu Xu | Po-Yao Huang | Xiaoqing Tan | Ching-Feng Yeh | Jacob Kahn | Christine Jou | Gargi Ghosh | Omer Levy | Luke Zettlemoyer | Wen-tau Yih | Shang-Wen Li | Saining Xie | Christoph Feichtenhofer
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

This paper focuses on creating synthetic data to improve the quality of image captions. Existing works typically suffer from two shortcomings: first, they caption images from scratch, ignoring existing alt-text metadata; second, they lack transparency when the captioners’ training data (e.g., GPT) is unknown. In this paper, we study a principled approach, Altogether, based on the key idea of editing and re-aligning the existing alt-texts associated with images. To generate training data, we perform human annotation in which annotators start from the existing alt-text and re-align it to the image content over multiple rounds, consequently constructing captions with rich visual concepts. This differs from prior work that carries out human annotation as a one-time description task based solely on the images and annotator knowledge. We then train a captioner on this data to generalize the re-alignment process at scale. Our results show that Altogether leads to richer image captions that also improve text-to-image generation and zero-shot image classification.
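To make the re-alignment idea concrete, here is a minimal Python sketch of the workflow the abstract describes: instead of captioning an image from scratch, a captioner conditions on both the image and the current alt-text and edits it over multiple rounds. Everything here is an assumption for illustration; `ReAlignExample`, the `captioner.edit` interface, and `NUM_ROUNDS` are hypothetical names, not the paper's actual API or implementation.

```python
# Hypothetical sketch of multi-round alt-text re-alignment.
# None of these names come from the paper; they only illustrate
# the process described in the abstract.

from dataclasses import dataclass

NUM_ROUNDS = 3  # the abstract says "multiple rounds"; 3 is an arbitrary choice


@dataclass
class ReAlignExample:
    image: bytes    # raw image content
    alt_text: str   # existing, often noisy or incomplete, alt-text metadata


def realign(captioner, example: ReAlignExample) -> str:
    """Iteratively edit the existing alt-text toward the image content,
    rather than generating a caption from scratch."""
    caption = example.alt_text
    for _ in range(NUM_ROUNDS):
        # Each round conditions on the image and the current draft,
        # producing an edited caption that is better aligned to the image.
        caption = captioner.edit(image=example.image, draft=caption)
    return caption
```

In this reading, the human annotation stage produces (alt-text, re-aligned caption) pairs, and the trained captioner plays the role of `captioner.edit`, generalizing the same editing process to web-scale data.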