Yaron Fairstein
2024
Evaluating D-MERIT of Partial-annotation on Information Retrieval
Royi Rassin | Yaron Fairstein | Oren Kalinsky | Guy Kushilevitz | Nachshon Cohen | Alexander Libov | Yoav Goldberg
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Extremely efficient online query encoding for dense retrieval
Nachshon Cohen | Yaron Fairstein | Guy Kushilevitz
Findings of the Association for Computational Linguistics: NAACL 2024
Existing dense retrieval systems utilize the same model architecture for encoding both the passages and the queries, even though queries are much shorter and simpler than passages. This leads to high latency of the query encoding, which is performed online and therefore might impact user experience. We show that combining a standard large passage encoder with a small efficient query encoder can provide significant latency drops with only a small decrease in quality. We offer a pretraining and training solution for multiple small query encoder architectures. Using a small transformer architecture we are able to decrease latency by up to ∼12×, while MRR@10 on the MS MARCO dev set only decreases from 38.2 to 36.2. If this solution does not reach the desired latency requirements, we propose an efficient RNN as the query encoder, which processes the query prefix incrementally and only infers the last word after the query is issued. This shortens latency by ∼38× with only a minor drop in quality, reaching 35.5 MRR@10 score.
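The incremental idea behind the RNN query encoder can be sketched as follows. This is an illustrative toy, not the paper's implementation: the GRU cell, dimensions, and random weights are all placeholder assumptions. The point it shows is that each typed token costs a fixed-size recurrent step, so by the time the query is issued only the final token remains to be processed.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IncrementalQueryEncoder:
    """Toy GRU that encodes a query one token at a time, so most of the
    encoding work happens while the user is still typing. All sizes and
    weights here are arbitrary placeholders for illustration."""

    def __init__(self, vocab_size=1000, dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.emb = rng.standard_normal((vocab_size, dim)) * 0.1
        # stacked weights for the update (z), reset (r), and candidate gates
        self.W = rng.standard_normal((3 * dim, dim)) * 0.1   # input weights
        self.U = rng.standard_normal((3 * dim, dim)) * 0.1   # recurrent weights
        self.dim = dim
        self.h = np.zeros(dim)                               # running query state

    def feed(self, token_id):
        """Consume one token of the query prefix.
        Cost is O(dim^2) per token, independent of prefix length."""
        x = self.emb[token_id]
        d = self.dim
        z = _sigmoid(self.W[:d] @ x + self.U[:d] @ self.h)
        r = _sigmoid(self.W[d:2 * d] @ x + self.U[d:2 * d] @ self.h)
        h_cand = np.tanh(self.W[2 * d:] @ x + self.U[2 * d:] @ (r * self.h))
        self.h = (1 - z) * self.h + z * h_cand
        return self.h    # current query embedding, usable for retrieval
```

In use, `feed` would be called as each token of the prefix arrives; once the query is issued, a single additional step yields the final query embedding.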
Class Balancing for Efficient Active Learning in Imbalanced Datasets
Yaron Fairstein | Oren Kalinsky | Zohar Karnin | Guy Kushilevitz | Alexander Libov | Sofia Tolmach
Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII)
Recent developments in active learning algorithms for NLP tasks show promising results in terms of reducing labelling complexity. In this paper we extend this effort to imbalanced datasets; we bridge between the active learning approach of obtaining diverse and informative examples, and the heuristic of class balancing used in imbalanced datasets. We develop a novel tune-free weighting technique that can be applied to various existing active learning algorithms, adding a component of class balancing. We compare several active learning algorithms to their modified versions on multiple public datasets and show that when the classes are imbalanced, with manual annotation effort remaining equal, the modified version significantly outperforms the original both in terms of the test metric and the number of obtained minority examples. Moreover, when the imbalance is mild or non-existent (classes are completely balanced), our technique does not harm the base algorithms.
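The general shape of the idea can be sketched as combining an acquisition score with a class-balancing weight. This is only a generic illustration under assumed names and a simple inverse-count weight; the paper's actual tune-free weighting technique is not reproduced here.

```python
import numpy as np

def balanced_acquisition(scores, predicted_labels, labelled_counts):
    """Reweight active-learning acquisition scores so that candidates whose
    predicted class is under-represented in the labelled pool are preferred.

    scores          -- base acquisition scores for the unlabelled candidates
                       (e.g. uncertainty), higher = more informative
    predicted_labels -- model's predicted class for each candidate
    labelled_counts  -- how many labelled examples each class already has

    Illustrative sketch only: the inverse-count weight below is a simple
    stand-in, not the tune-free weighting developed in the paper.
    """
    counts = np.asarray(labelled_counts, dtype=float)
    weights = 1.0 / (1.0 + counts)   # rarer labelled classes get larger weight
    return np.asarray(scores) * weights[np.asarray(predicted_labels)]
```

A selection step would then pick the top-k candidates by the reweighted score, e.g. `np.argsort(-balanced)[:k]`, nudging the labelled pool toward class balance at equal annotation cost.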
Co-authors
- Guy Kushilevitz 3
- Oren Kalinsky 2
- Nachshon Cohen 2
- Alexander Libov 2
- Royi Rassin 1