Condenser: a Pre-training Architecture for Dense Retrieval

Luyu Gao, Jamie Callan


Abstract
Pre-trained Transformer language models (LM) have become go-to text representation encoders. Prior research fine-tunes deep LMs to encode text sequences such as sentences and passages into single dense vector representations for efficient text comparison and retrieval. However, dense encoders require large amounts of data and sophisticated techniques to train effectively, and they suffer in low-data situations. This paper finds a key reason is that standard LMs’ internal attention structure is not ready-to-use for dense encoders, which need to aggregate text information into the dense representation. We propose to pre-train towards dense encoding with a novel Transformer architecture, Condenser, in which LM prediction CONditions on DENSE Representation. Our experiments show Condenser improves over standard LMs by large margins on various text retrieval and similarity tasks.
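The abstract describes an architecture in which masked-token prediction is conditioned on the dense representation: a short prediction head reads the backbone's late [CLS] vector alongside early-layer token states, so the [CLS] must condense the sequence information the head needs. Below is a minimal, hypothetical PyTorch sketch of that idea; the class name, layer counts, and the early_layer split point are illustrative assumptions, not the authors' implementation (see the linked luyug/Condenser repository for that).

import torch
import torch.nn as nn

class CondenserSketch(nn.Module):
    # Backbone Transformer plus a short "Condenser" head used only during pre-training.
    # The head sees the late [CLS] vector together with early-layer token states, so
    # masked-token prediction must route sequence information through the dense [CLS].
    # (Positional embeddings, attention masks, and the MLM loss are omitted for brevity.)
    def __init__(self, vocab_size=30522, d_model=768, n_heads=12,
                 n_backbone_layers=12, n_head_layers=2, early_layer=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        def block():
            return nn.TransformerEncoderLayer(d_model, n_heads,
                                              dim_feedforward=4 * d_model,
                                              batch_first=True)
        self.backbone = nn.ModuleList([block() for _ in range(n_backbone_layers)])
        self.head = nn.ModuleList([block() for _ in range(n_head_layers)])
        self.early_layer = early_layer             # depth at which early token states are taken
        self.mlm = nn.Linear(d_model, vocab_size)  # masked-language-model prediction layer

    def forward(self, input_ids):                  # input_ids: (batch, seq_len), position 0 is [CLS]
        h = self.embed(input_ids)
        early = None
        for i, layer in enumerate(self.backbone):
            h = layer(h)
            if i + 1 == self.early_layer:
                early = h                          # early token representations
        cls_late = h[:, :1]                        # late [CLS]: the dense representation
        # Condenser head: prediction CONditions on the DENSE [CLS] Representation
        x = torch.cat([cls_late, early[:, 1:]], dim=1)
        for layer in self.head:
            x = layer(x)
        return self.mlm(x)                         # MLM logits over the head's outputs

In the paper's setup, the head is discarded after pre-training; only the backbone is kept and fine-tuned as a dense encoder, with the late [CLS] vector serving as the text representation.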
Anthology ID:
2021.emnlp-main.75
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
981–993
URL:
https://aclanthology.org/2021.emnlp-main.75
DOI:
10.18653/v1/2021.emnlp-main.75
Cite (ACL):
Luyu Gao and Jamie Callan. 2021. Condenser: a Pre-training Architecture for Dense Retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 981–993, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Condenser: a Pre-training Architecture for Dense Retrieval (Gao & Callan, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.75.pdf
Video:
https://aclanthology.org/2021.emnlp-main.75.mp4
Code
luyug/Condenser
Data
GLUE, MS MARCO, Natural Questions, TriviaQA