Building Joint Relationship Attention Network for Image-Text Generation

Changzhi Wang, Xiaodong Gu


Abstract
Attention-based methods for image-text generation often attend to visual features individually, ignoring the relationship information among image features that provides important guidance for sentence generation. To alleviate this issue, we propose the Joint Relationship Attention Network (JRAN), which explicitly explores the relationships among features. Unlike previous relationship-based approaches that exploit only a single kind of relationship in the image, JRAN effectively learns two kinds of relationships, the visual relationships among region features and the visual-semantic relationships between region features and semantic features, and makes a dynamic trade-off between them when producing the relationship representation. Moreover, we devise a new relationship-based attention that adaptively focuses on the resulting relationship representation when predicting different words. Extensive experiments on the large-scale MSCOCO and small-scale Flickr30k datasets show that JRAN achieves state-of-the-art performance. More remarkably, JRAN achieves new scores of 28.3% and 58.2% in terms of the BLEU-4 and CIDEr metrics on the Flickr30k dataset.
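
The following is a minimal, illustrative sketch (not the authors' implementation) of the mechanism the abstract describes: two relationship representations are built, one from region-region pairs and one from region-semantic pairs, fused through a learned gate that realizes the dynamic trade-off, and then attended over given the decoder hidden state. All module names, layer choices, and dimensions below are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class JointRelationshipAttentionSketch(nn.Module):
    """Hypothetical sketch of joint relationship attention; dimensions are illustrative."""

    def __init__(self, dim=512):
        super().__init__()
        self.vis_rel = nn.Linear(2 * dim, dim)   # region-region relationship encoder
        self.sem_rel = nn.Linear(2 * dim, dim)   # region-semantic relationship encoder
        self.gate = nn.Linear(2 * dim, dim)      # dynamic trade-off between the two
        self.att_h = nn.Linear(dim, dim)         # attention: decoder-state projection
        self.att_r = nn.Linear(dim, dim)         # attention: relationship projection
        self.att_v = nn.Linear(dim, 1)           # attention: score head

    def forward(self, regions, semantics, hidden):
        # regions:   (B, N, dim) region features
        # semantics: (B, M, dim) semantic (concept) features
        # hidden:    (B, dim)    decoder hidden state at the current step
        B, N, d = regions.shape
        M = semantics.size(1)

        # Visual relationships: encode every region pair, pool over partner regions.
        ri = regions.unsqueeze(2).expand(B, N, N, d)
        rj = regions.unsqueeze(1).expand(B, N, N, d)
        vis = torch.tanh(self.vis_rel(torch.cat([ri, rj], dim=-1))).mean(dim=2)

        # Visual-semantic relationships: encode region-semantic pairs, pool over semantics.
        re = regions.unsqueeze(2).expand(B, N, M, d)
        se = semantics.unsqueeze(1).expand(B, N, M, d)
        sem = torch.tanh(self.sem_rel(torch.cat([re, se], dim=-1))).mean(dim=2)

        # Dynamic trade-off: a sigmoid gate mixes the two relationship representations.
        g = torch.sigmoid(self.gate(torch.cat([vis, sem], dim=-1)))
        rel = g * vis + (1.0 - g) * sem

        # Relationship-based attention conditioned on the decoder hidden state.
        scores = self.att_v(torch.tanh(self.att_h(hidden).unsqueeze(1) + self.att_r(rel)))
        alpha = F.softmax(scores, dim=1)
        context = (alpha * rel).sum(dim=1)
        return context, alpha


if __name__ == "__main__":
    model = JointRelationshipAttentionSketch(dim=512)
    regions = torch.randn(2, 36, 512)    # e.g. 36 detected regions per image
    semantics = torch.randn(2, 10, 512)  # e.g. 10 predicted semantic concepts
    hidden = torch.randn(2, 512)
    ctx, alpha = model(regions, semantics, hidden)
    print(ctx.shape, alpha.shape)        # torch.Size([2, 512]) torch.Size([2, 36, 1])

In an actual captioning decoder, the returned context vector would be fed to the word predictor at each step, so the attention weights over the fused relationship representation can shift from word to word, as the abstract describes.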
Anthology ID:
2022.coling-1.489
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5521–5531
URL:
https://aclanthology.org/2022.coling-1.489
Cite (ACL):
Changzhi Wang and Xiaodong Gu. 2022. Building Joint Relationship Attention Network for Image-Text Generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5521–5531, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Building Joint Relationship Attention Network for Image-Text Generation (Wang & Gu, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.489.pdf
Data
MS COCO