RankDCG: Rank-Ordering Evaluation Measure

Denys Katerenchuk, Andrew Rosenberg


Abstract
Ranking is used for a wide array of problems, most notably information retrieval (search). Kendall’s τ, Average Precision, and nDCG are a few popular approaches to the evaluation of ranking. When applied to problems such as user ranking or recommendation systems, all of these measures suffer from shortcomings, including the inability to deal with elements of the same rank, inconsistent and ambiguous lower bound scores, and an inappropriate cost function. We propose a new measure, a modification of the popular nDCG algorithm, named rankDCG, that addresses these problems. We provide a number of criteria for any effective ranking algorithm and show that only rankDCG satisfies them all. Results are presented on constructed and real data sets. We release a publicly available rankDCG evaluation package.
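For context, the sketch below shows the standard nDCG computation that rankDCG modifies: graded relevance discounted by log-position, normalized by the DCG of the ideal ordering. This is a minimal Python illustration, not the released rankDCG package or its cost function; the paper's modifications (tie handling, lower bound, cost) are not reproduced here, and the function names are illustrative.

import math
from typing import Sequence

def dcg(relevances: Sequence[float]) -> float:
    # Discounted cumulative gain: relevance at position i is discounted by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances: Sequence[float]) -> float:
    # Normalize by the DCG of the ideal (descending) ordering of the same items.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Relevance scores of items in the order a system ranked them:
print(ndcg([3, 2, 3, 0, 1]))  # ~0.97; 1.0 only for the ideal ordering [3, 3, 2, 1, 0]

The abstract's criticisms target exactly this normalization: when several items share the same relevance (rank ties), the "ideal" ordering is ambiguous, and the score of a random or worst-case ordering is not a consistent lower bound.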
Anthology ID:
L16-1583
Volume:
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Month:
May
Year:
2016
Address:
Portorož, Slovenia
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3675–3680
URL:
https://aclanthology.org/L16-1583
Cite (ACL):
Denys Katerenchuk and Andrew Rosenberg. 2016. RankDCG: Rank-Ordering Evaluation Measure. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3675–3680, Portorož, Slovenia. European Language Resources Association (ELRA).
Cite (Informal):
RankDCG: Rank-Ordering Evaluation Measure (Katerenchuk & Rosenberg, LREC 2016)
PDF:
https://aclanthology.org/L16-1583.pdf