%0 Journal Article
%T Sparse, Dense, and Attentional Representations for Text Retrieval
%A Luan, Yi
%A Eisenstein, Jacob
%A Toutanova, Kristina
%A Collins, Michael
%J Transactions of the Association for Computational Linguistics
%D 2021
%V 9
%I MIT Press
%C Cambridge, MA
%F luan-etal-2021-sparse
%X Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
%R 10.1162/tacl_a_00369
%U https://aclanthology.org/2021.tacl-1.20
%U https://doi.org/10.1162/tacl_a_00369
%P 329-345