Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models

Eldar Kurtic, Amir Moeini, Dan Alistarh


Abstract
We introduce Mathador-LM, a new benchmark for evaluating the mathematical reasoning of large language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This benchmark is inspired by the Mathador game, where the objective is to reach a target number using basic arithmetic operations on a given set of base numbers, following a simple set of rules. We show that, across leading LLMs, we obtain stable average performance while generating benchmark instances dynamically, following a target difficulty level. Thus, our benchmark alleviates concerns about test-set leakage into training data, an issue that often undermines popular benchmarks. Additionally, we conduct a comprehensive evaluation of both open- and closed-source state-of-the-art LLMs on Mathador-LM. Our findings reveal that contemporary models struggle with Mathador-LM, scoring significantly lower than the average 3rd grader. This stands in stark contrast to their strong performance on popular mathematical reasoning benchmarks. The implementation of the Mathador-LM benchmark is available at https://github.com/IST-DASLab/Mathador-LM.
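To make the game objective concrete, the following is a minimal sketch of a Mathador-style solution checker: given a set of base numbers and a target, it searches for a chain of basic arithmetic operations that reaches the target. The function name, the restriction to left-to-right chains, and the exact-division rule are illustrative assumptions, not the paper's actual ruleset or implementation.

```python
from itertools import permutations, product

# Basic arithmetic operations; division is only allowed when it is exact,
# an assumption made here to keep the search over integers.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a // b if b != 0 and a % b == 0 else None,
}

def reaches_target(base, target):
    """Return one expression over all base numbers that evaluates to
    `target`, or None. Only left-to-right chains like ((a op b) op c)
    are explored, so this is a simplified sketch, not a full solver."""
    for nums in permutations(base):
        for ops in product(OPS, repeat=len(nums) - 1):
            acc, expr = nums[0], str(nums[0])
            for op, n in zip(ops, nums[1:]):
                acc = OPS[op](acc, n)
                if acc is None:  # invalid step (e.g. inexact division)
                    break
                expr = f"({expr} {op} {n})"
            if acc == target:
                return expr
    return None

# Example: (5 + 4) * 3 - 2 = 25
print(reaches_target([2, 3, 4, 5], 25))
```

A real instance generator would additionally score solutions by the rules of the game and sample instances at a target difficulty level, as the abstract describes.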
Anthology ID:
2024.emnlp-main.946
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
17020–17027
URL:
https://aclanthology.org/2024.emnlp-main.946
DOI:
10.18653/v1/2024.emnlp-main.946
Cite (ACL):
Eldar Kurtic, Amir Moeini, and Dan Alistarh. 2024. Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17020–17027, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models (Kurtic et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.946.pdf
Software:
 2024.emnlp-main.946.software.zip