Matija Franklin


2025

LMUNIT: Fine-grained Evaluation with Natural Language Unit Tests
Jon Saad-Falcon | Rajan Pathe Vivek | William Berrios | Nandita Shankar Naik | Matija Franklin | Bertie Vidgen | Amanpreet Singh | Douwe Kiela | Shikib Mehri
Findings of the Association for Computational Linguistics: EMNLP 2025

As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge: human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, along with a unified scoring model, LMUnit, which combines multi-objective training across preferences, direct ratings, and natural language rationales. Through controlled human studies, we show this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on evaluation benchmarks including FLASK, BigGenBench, and RewardBench 2, while maintaining competitive results on the original RewardBench. These results validate both our proposed paradigm and scoring model, suggesting a promising path forward for language model evaluation and development. Our code has been released at github.com/ContextualAI/LMUnit under an MIT license.
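
As a rough illustration of the unit-test paradigm the abstract describes, the sketch below shows how response quality might be decomposed into per-criterion scores. This is not the LMUnit repository's actual API: the function names, example criteria, and 0-1 scoring scale are all hypothetical stand-ins.

```python
# Minimal sketch of the natural language unit test paradigm. The
# score_unit_test() call is a hypothetical placeholder, NOT the actual
# API of github.com/ContextualAI/LMUnit.

from typing import Dict, List

def score_unit_test(query: str, response: str, unit_test: str) -> float:
    """Score how well `response` satisfies the natural-language criterion
    in `unit_test` for the given `query`, on a 0-1 scale (hypothetical).
    A real implementation would call a trained scoring model such as
    LMUnit; a constant is returned here so the sketch runs end to end."""
    return 0.5

def evaluate(query: str, response: str,
             unit_tests: List[str]) -> Dict[str, float]:
    """Decompose response quality into explicit, testable criteria and
    score each one, as the paradigm proposes."""
    scores = {test: score_unit_test(query, response, test)
              for test in unit_tests}
    scores["average"] = sum(scores.values()) / len(unit_tests)
    return scores

if __name__ == "__main__":
    # Hypothetical unit tests for a summarization task.
    unit_tests = [
        "Does the response cover every key point of the source document?",
        "Does the response avoid claims unsupported by the source?",
        "Is the response concise and free of repetition?",
    ]
    results = evaluate("Summarize the attached report.", "...", unit_tests)
    for criterion, score in results.items():
        print(f"{score:.2f}  {criterion}")
```

Keeping scores per criterion, rather than collapsing them into a single number, is what distinguishes this paradigm from the coarse, difficult-to-interpret signals the abstract contrasts it with.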