Scene-Text Aware Image and Text Retrieval with Dual-Encoder

Shumpei Miyawaki, Taku Hasegawa, Kyosuke Nishida, Takuma Kato, Jun Suzuki


Abstract
We tackle the tasks of image and text retrieval using a dual-encoder model, in which images and text are encoded independently. This model has attracted attention as an approach that enables efficient offline inference by connecting vision and language in the same semantic space; however, it is unclear whether the image encoder of a dual-encoder model can interpret scene-text (i.e., the textual information that appears in images). We propose pre-training methods that encourage a joint understanding of scene-text and its surrounding visual information. The experimental results demonstrate that our methods improve the retrieval performance of dual-encoder models.
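
As a rough illustration of the dual-encoder setup described in the abstract, the sketch below shows how image and text embeddings can be computed independently and matched by cosine similarity, which is what allows image embeddings to be pre-computed offline. This is a minimal sketch, not the paper's model: the linear projections, feature dimensions, and variable names are placeholder assumptions standing in for the actual encoders and pre-training methods.

```python
# Minimal dual-encoder retrieval sketch (placeholder encoders, NOT the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoder(nn.Module):
    """Encodes images and text independently into a shared embedding space."""

    def __init__(self, image_dim=2048, text_dim=768, embed_dim=512):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)  # stand-in for an image encoder
        self.text_proj = nn.Linear(text_dim, embed_dim)    # stand-in for a text encoder

    def encode_image(self, image_feats):
        return F.normalize(self.image_proj(image_feats), dim=-1)

    def encode_text(self, text_feats):
        return F.normalize(self.text_proj(text_feats), dim=-1)


model = DualEncoder()

# Offline: pre-compute and cache image embeddings once.
image_feats = torch.randn(1000, 2048)          # dummy image features
image_embs = model.encode_image(image_feats)   # (1000, 512), can be stored in an index

# Online: embed a query caption and rank images by cosine similarity.
query_feats = torch.randn(1, 768)              # dummy text features
query_emb = model.encode_text(query_feats)     # (1, 512)
scores = query_emb @ image_embs.T              # cosine similarity (embeddings are unit-norm)
top5 = scores.topk(5, dim=-1).indices          # indices of the 5 best-matching images
print(top5)
```

Because the two encoders never attend to each other, the image index can be built once and reused for every text query; only the query embedding is computed at retrieval time.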
Anthology ID: 2022.acl-srw.34
Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Samuel Louvan, Andrea Madotto, Brielen Madureira
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 422–433
URL: https://aclanthology.org/2022.acl-srw.34
DOI: 10.18653/v1/2022.acl-srw.34
Cite (ACL): Shumpei Miyawaki, Taku Hasegawa, Kyosuke Nishida, Takuma Kato, and Jun Suzuki. 2022. Scene-Text Aware Image and Text Retrieval with Dual-Encoder. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 422–433, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Scene-Text Aware Image and Text Retrieval with Dual-Encoder (Miyawaki et al., ACL 2022)
PDF: https://aclanthology.org/2022.acl-srw.34.pdf
Data: TextCaps