Joel Mackenzie


2022

Accelerating Learned Sparse Indexes Via Term Impact Decomposition
Joel Mackenzie | Antonio Mallia | Alistair Moffat | Matthias Petri
Findings of the Association for Computational Linguistics: EMNLP 2022

Novel inverted index-based learned sparse ranking models provide more effective, but less efficient, retrieval performance compared to traditional ranking models like BM25. In this paper, we introduce a technique we call postings clipping to improve the query efficiency of learned representations. Our technique amplifies the benefit of dynamic pruning query processing techniques by accounting for changes in term importance distributions of learned ranking models. The new clipping mechanism accelerates top-k retrieval by up to 9.6× without any loss in effectiveness.
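
To make the decomposition idea concrete, below is a minimal sketch of one plausible reading of impact-based postings splitting: each term's postings list is divided into a short high-impact segment and a long low-impact remainder, each carrying its own score upper bound for a dynamic pruning algorithm such as MaxScore. The names (PostingsSegment, decompose_postings, high_fraction) and the splitting heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of impact-based postings decomposition.
# Names and the splitting heuristic are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class PostingsSegment:
    """One segment of a decomposed postings list."""
    doc_ids: list        # document identifiers in this segment
    impacts: list        # quantized impact scores, aligned with doc_ids
    max_impact: int = 0  # per-segment upper bound used by dynamic pruning

    def __post_init__(self):
        self.max_impact = max(self.impacts, default=0)

def decompose_postings(doc_ids, impacts, high_fraction=0.1):
    """Split a postings list into a small high-impact segment and a
    low-impact remainder.

    Learned sparse models often concentrate a term's score mass in a
    few postings; isolating those gives the remainder a much tighter
    max-impact bound, so pruning algorithms can skip it more often.
    """
    order = sorted(range(len(doc_ids)), key=lambda i: impacts[i], reverse=True)
    cut = max(1, int(len(order) * high_fraction))
    head, tail = order[:cut], order[cut:]

    def build(idx):
        idx = sorted(idx)  # restore docid order within each segment
        return PostingsSegment([doc_ids[i] for i in idx],
                               [impacts[i] for i in idx])

    return build(head), build(tail)

# Example: a term whose impact mass sits in a handful of documents.
docs    = [3, 8, 11, 19, 27, 40, 52, 66, 71, 90]
impacts = [2, 1, 30, 1, 2, 1, 28, 1, 2, 1]
high, low = decompose_postings(docs, impacts, high_fraction=0.2)
print(high.max_impact, low.max_impact)  # 30 vs. 2: a far tighter tail bound
```

Under this reading, the low-impact remainder's small bound would let a MaxScore- or WAND-style traversal classify it as non-essential for most queries, which is one plausible mechanism behind the reported speedups.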