Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval

Siyu Ren, Kenny Zhu


Abstract
Current text-image retrieval approaches (e.g., CLIP) typically adopt a dual-encoder architecture built on pre-trained vision-language representations. However, these models still pose non-trivial memory requirements and substantial incremental indexing time, which makes them less practical on mobile devices. In this paper, we present an effective two-stage framework to compress a large pre-trained dual-encoder for lightweight text-image retrieval. The resulting model is smaller (39% of the original), faster (1.6x/2.9x for processing images/text respectively), yet performs on par with or better than the original full model on the Flickr30K and MSCOCO benchmarks. We also open-source an accompanying realistic mobile image search application.
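For readers unfamiliar with the dual-encoder setup the paper compresses, the sketch below illustrates the retrieval pattern: images are encoded once offline into an index, and at query time only the text encoder runs, with ranking done by cosine similarity. This is a minimal, hypothetical illustration, not the authors' code; the encoder classes, dimensions, and variable names are stand-ins.

```python
# Minimal sketch of dual-encoder text-image retrieval.
# TinyEncoder is a hypothetical stand-in for a compressed encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Placeholder for a (compressed) text or image encoder."""
    def __init__(self, in_dim: int, embed_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so the dot product equals cosine similarity.
        return F.normalize(self.proj(x), dim=-1)

text_encoder = TinyEncoder(in_dim=512)
image_encoder = TinyEncoder(in_dim=768)

# Offline: encode the image collection once and keep the index in memory.
image_feats = torch.randn(1000, 768)      # placeholder image features
with torch.no_grad():
    index = image_encoder(image_feats)    # (1000, 256)

# Online: encode the text query and rank images by cosine similarity.
query_feats = torch.randn(1, 512)         # placeholder text features
with torch.no_grad():
    query = text_encoder(query_feats)     # (1, 256)
scores = query @ index.T                  # (1, 1000)
print(scores.topk(5).indices)             # indices of the top-5 images
```

Because the image index is computed offline, the memory footprint and the per-image encoding time targeted by the paper's compression directly determine how practical this setup is on a mobile device.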
Anthology ID:
2022.naacl-main.300
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4085–4090
URL:
https://aclanthology.org/2022.naacl-main.300
DOI:
10.18653/v1/2022.naacl-main.300
Cite (ACL):
Siyu Ren and Kenny Zhu. 2022. Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4085–4090, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval (Ren & Zhu, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.300.pdf
Software:
2022.naacl-main.300.software.zip
Video:
https://aclanthology.org/2022.naacl-main.300.mp4
Code:
drsy/motis
Data:
MS COCO