Towards Multimodal Vision-Language Models Generating Non-Generic Text

Wes Robbins, Zanyar Zohourianshahzadi, Jugal Kalita


Abstract
Vision-language models can assess visual context in an image and generate descriptive text. While the generated text may be accurate and syntactically correct, it is often overly general. To address this, recent work has used optical character recognition to supplement visual information with text extracted from an image. In this work, we contend that vision-language models can benefit from additional information that can be extracted from an image but is not used by current models. We modify previous multimodal frameworks to accept relevant information from any number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset to facilitate captioning with person names. The dataset, Politicians and Athletes in Captions (PAC), consists of captioned images of well-known people in context. By fine-tuning pretrained models with this dataset, we demonstrate a model that can naturally integrate facial recognition tokens into generated text while training on limited data. For the PAC dataset, we provide a discussion of collection and baseline benchmark scores.
Anthology ID:
2021.icon-main.27
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
220–230
URL:
https://aclanthology.org/2021.icon-main.27
Cite (ACL):
Wes Robbins, Zanyar Zohourianshahzadi, and Jugal Kalita. 2021. Towards Multimodal Vision-Language Models Generating Non-Generic Text. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 220–230, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
Towards Multimodal Vision-Language Models Generating Non-Generic Text (Robbins et al., ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.27.pdf
Optional supplementary material:
 2021.icon-main.27.OptionalSupplementaryMaterial.pdf
Data
MS COCO, TextCaps