Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines

Jan Rygl, Jan Pomikálek, Radim Řehůřek, Michal Růžička, Vít Novotný, Petr Sojka


Abstract
Vector representations and vector space modeling (VSM) play a central role in modern machine learning. We propose a novel approach to ‘vector similarity searching’ over dense semantic representations of words and documents that can be deployed on top of traditional inverted-index-based fulltext engines, taking advantage of their robustness, stability, scalability and ubiquity. We show that this approach allows the indexing and querying of dense vectors in text domains. This opens up exciting avenues for major efficiency gains, along with simpler deployment, scaling and monitoring. The end result is a fast and scalable vector database with a tunable trade-off between vector search performance and quality, backed by a standard fulltext engine such as Elasticsearch. We empirically demonstrate its querying performance and quality by applying this solution to the task of semantic searching over a dense vector representation of the entire English Wikipedia.
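The abstract itself does not spell out the encoding scheme, but the core idea — making dense vectors searchable by a standard inverted-index engine — can be illustrated with a minimal sketch. The token format (`d{i}_{b}`), bucket count, and overlap scoring below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def encode_vector(vec, n_buckets=8):
    """Quantize each dimension of a dense vector into a string token
    so it can be indexed by an inverted-index fulltext engine.
    Token 'd{i}_{b}' means dimension i fell into bucket b (illustrative)."""
    # Map each component from [-1, 1] onto integer buckets 0..n_buckets-1.
    clipped = np.clip(vec, -1.0, 1.0)
    buckets = ((clipped + 1.0) / 2.0 * (n_buckets - 1)).round().astype(int)
    return [f"d{i}_{b}" for i, b in enumerate(buckets)]

def token_overlap(tokens_a, tokens_b):
    """Proxy similarity: count of shared tokens, roughly what a fulltext
    engine's OR-query scoring over these tokens would approximate."""
    return len(set(tokens_a) & set(tokens_b))

# Two nearby vectors share most tokens; a distant one shares few.
a = np.array([0.9, -0.2, 0.5, 0.1])
b = np.array([0.85, -0.25, 0.55, 0.05])
c = np.array([-0.9, 0.8, -0.5, -0.7])
print(token_overlap(encode_vector(a), encode_vector(b)))  # large overlap
print(token_overlap(encode_vector(a), encode_vector(c)))  # small overlap
```

Coarser buckets trade search quality for shorter postings lists, which is one way to realize the tunable performance/quality trade-off the abstract mentions.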
Anthology ID:
W17-2611
Volume:
Proceedings of the 2nd Workshop on Representation Learning for NLP
Month:
August
Year:
2017
Address:
Vancouver, Canada
Venues:
RepL4NLP | WS
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
81–90
URL:
https://aclanthology.org/W17-2611
DOI:
10.18653/v1/W17-2611
Cite (ACL):
Jan Rygl, Jan Pomikálek, Radim Řehůřek, Michal Růžička, Vít Novotný, and Petr Sojka. 2017. Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 81–90, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines (Rygl et al., 2017)
PDF:
https://aclanthology.org/W17-2611.pdf