Methods, Applications, and Directions of Learning-to-Rank in NLP Research

Justin Lee, Gabriel Bernier-Colborne, Tegan Maharaj, Sowmya Vajjala


Abstract
Learning-to-rank (LTR) algorithms aim to order a set of items according to some criteria. They are at the core of applications such as web search and social media recommendations, and are an area of rapidly increasing interest, with the rise of large language models (LLMs) and the widespread impact of these technologies on society. In this paper, we survey the diverse use cases of LTR methods in natural language processing (NLP) research, looking at previously under-studied aspects such as multilingualism in LTR applications and statistical significance testing for LTR problems. We also consider how large language models are changing the LTR landscape. This survey is aimed at NLP researchers and practitioners interested in understanding the formalisms and best practices regarding the application of LTR approaches in their research.
Anthology ID: 2024.findings-naacl.123
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1900–1917
URL: https://aclanthology.org/2024.findings-naacl.123
DOI: 10.18653/v1/2024.findings-naacl.123
Cite (ACL): Justin Lee, Gabriel Bernier-Colborne, Tegan Maharaj, and Sowmya Vajjala. 2024. Methods, Applications, and Directions of Learning-to-Rank in NLP Research. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1900–1917, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Methods, Applications, and Directions of Learning-to-Rank in NLP Research (Lee et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.123.pdf