MMLU-SR: A Benchmark for Stress-Testing Reasoning Capability of Large Language Models

Wentian Wang, Sarthak Jain, Paul Kantor, Jacob Feldman, Lazaros Gallos, Hao Wang


Abstract
We propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance on question-answering tasks with modified terms. We reasoned that an agent that "truly" understands a concept can still evaluate it when key terms are replaced by suitably defined alternate terms, and we sought to differentiate such comprehension from mere surface-level text matching. In our study, we modified standardized test questions by replacing a key term with a dummy word along with its definition. The key term could appear in the question, in the answer options, or in both. Notwithstanding the high scores achieved by recent popular LLMs on the MMLU leaderboard, we found a substantial reduction in model performance after such replacement, suggesting poor comprehension. This dataset provides a rigorous benchmark for testing true model comprehension and poses a challenge to the broader scientific community.
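To make the construction concrete, here is a minimal sketch of the symbol-replacement idea the abstract describes. All names here (replace_key_term, the dummy token "flurp", the example question) are illustrative assumptions, not the authors' released code or the actual MMLU-SR pipeline.

```python
# Hypothetical sketch of MMLU-SR-style symbol replacement: swap a key term
# for a dummy word and prepend the term's definition, so a model must reason
# from the definition rather than from the familiar surface form.

def replace_key_term(question: str, term: str, definition: str,
                     dummy: str = "flurp") -> str:
    """Return the question with `term` replaced by `dummy`, prefixed by
    a sentence defining the dummy word."""
    replaced = question.replace(term, dummy)
    return f'Suppose "{dummy}" means: {definition}. {replaced}'

# Example in the spirit of an MMLU question (illustrative only):
q = "What is the derivative of x^2?"
print(replace_key_term(
    q, "derivative",
    "the rate of change of a function with respect to its variable"))
# Suppose "flurp" means: the rate of change of a function with respect to
# its variable. What is the flurp of x^2?
```

The same substitution can be applied to the question text, the answer options, or both, matching the three replacement settings described above.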
Anthology ID:
2024.genbench-1.5
Volume:
Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Dieuwke Hupkes, Verna Dankers, Khuyagbaatar Batsuren, Amirhossein Kazemnejad, Christos Christodoulopoulos, Mario Giulianelli, Ryan Cotterell
Venue:
GenBench
Publisher:
Association for Computational Linguistics
Pages:
69–85
URL:
https://aclanthology.org/2024.genbench-1.5
Cite (ACL):
Wentian Wang, Sarthak Jain, Paul Kantor, Jacob Feldman, Lazaros Gallos, and Hao Wang. 2024. MMLU-SR: A Benchmark for Stress-Testing Reasoning Capability of Large Language Models. In Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP, pages 69–85, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
MMLU-SR: A Benchmark for Stress-Testing Reasoning Capability of Large Language Models (Wang et al., GenBench 2024)
PDF:
https://aclanthology.org/2024.genbench-1.5.pdf