Ilias Leontiadis
2026
Balanced Accuracy: The Right Metric for Evaluating LLM Judges - Explained through Youden’s J statistic
Stephane Collot | Colin Fraser | Justin Zhao | William F. Shen | Timon Willi | Ilias Leontiadis
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
Rigorous evaluation of large language models (LLMs) relies on comparing models by the prevalence of desirable or undesirable behaviors, such as task pass rates or policy violations. These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of positive class, and can favor judges that distort prevalence estimates. We show that Youden’s J statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of J. Through both analytical arguments and empirical examples and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.
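The abstract's central claim, that Balanced Accuracy is a linear transformation of Youden's J, can be sketched in a few lines. The confusion-matrix counts below are illustrative placeholders, not figures from the paper:

```python
# Sketch: Youden's J and Balanced Accuracy from a binary confusion matrix.
def youdens_j(tp, fn, tn, fp):
    tpr = tp / (tp + fn)  # sensitivity (true positive rate)
    tnr = tn / (tn + fp)  # specificity (true negative rate)
    return tpr + tnr - 1

def balanced_accuracy(tp, fn, tn, fp):
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return (tpr + tnr) / 2

# Illustrative counts: 100 true positives-class items, 1000 negatives.
tp, fn, tn, fp = 80, 20, 950, 50
j = youdens_j(tp, fn, tn, fp)    # 0.80 + 0.95 - 1 = 0.75
ba = balanced_accuracy(tp, fn, tn, fp)  # (0.80 + 0.95) / 2 = 0.875

# Balanced Accuracy equals (J + 1) / 2, so ranking judges by either
# metric yields the same ordering.
assert abs(ba - (j + 1) / 2) < 1e-12
```

Because the mapping is monotonic, any judge that maximizes J also maximizes Balanced Accuracy, which is why the paper treats the two as interchangeable for classifier selection.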
2017
Class-based Prediction Errors to Detect Hate Speech with Out-of-vocabulary Words
Joan Serrà | Ilias Leontiadis | Dimitris Spathis | Gianluca Stringhini | Jeremy Blackburn | Athena Vakali
Proceedings of the First Workshop on Abusive Language Online
Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly-interacting environments, where the appearance of new words and/or varied misspellings is the norm. A paradigmatic example of this situation is abusive online behavior, with social networks and media platforms struggling to effectively combat uncommon or non-blacklisted hate words. To better deal with these issues in those fast-paced environments, we propose using the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for any given class and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the ‘ability to describe’ seen documents to the ‘ability to predict’ unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4-11%.
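The core idea, using each class-based language model's prediction error on a document as a classification feature, can be sketched with a character bigram model standing in for the paper's neural next-character model. All training texts, inputs, and variable names below are illustrative placeholders:

```python
# Minimal sketch of class-based prediction errors as features,
# assuming a Laplace-smoothed character bigram LM per class in place
# of the paper's neural next-character predictor.
from collections import defaultdict
import math

def train_char_bigram(texts):
    """Count character-bigram transitions over the class's texts."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in texts:
        for a, b in zip(t, t[1:]):
            counts[a][b] += 1
    return counts

def avg_surprisal(model, text, vocab_size=256):
    """Mean negative log-likelihood per character: the model's 'error'."""
    nll = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(model[a].values())
        p = (model[a][b] + 1) / (total + vocab_size)  # Laplace smoothing
        nll += -math.log(p)
    return nll / max(len(text) - 1, 1)

# Hypothetical per-class training data.
abusive_lm = train_char_bigram(["you are trash", "total garbage"])
benign_lm = train_char_bigram(["have a great day", "thanks for sharing"])

# The error signal under each class LM becomes one feature; a downstream
# classifier (a neural network in the paper) consumes this vector.
doc = "what garbage"
features = [avg_surprisal(abusive_lm, doc), avg_surprisal(benign_lm, doc)]
```

Because surprisal is defined for any character sequence, the features remain informative for out-of-vocabulary words and misspellings, which is the motivation the abstract gives for shifting from describing seen documents to predicting unseen content.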