Is Your Large Language Model Knowledgeable or a Choices-Only Cheater?

Nishant Balepur, Rachel Rudinger


Abstract
Recent work shows that large language models (LLMs) can answer multiple-choice questions using only the choices, but does this mean that multiple-choice question answering (MCQA) leaderboard rankings of LLMs are largely influenced by their abilities in choices-only settings? To answer this, we use a contrast set that probes whether LLMs over-rely on choices-only shortcuts in MCQA. While previous works build contrast sets via expensive human annotations or model-generated data, which can be biased, we employ graph mining to extract contrast sets from existing MCQA datasets. We apply our method to UnifiedQA, a group of six commonsense reasoning datasets with high choices-only accuracy, to build an 820-question contrast set. After validating our contrast set, we test 12 LLMs, finding that these models do not exhibit reliance on choices-only shortcuts when given both the question and choices. Thus, despite the susceptibility of MCQA to high choices-only accuracy, we argue that LLMs are not obtaining high ranks on MCQA leaderboards solely due to their ability to exploit choices-only shortcuts.
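To make the "choices-only" setting concrete, below is a minimal sketch of the kind of probe the abstract describes: comparing a model's accuracy with the full question against its accuracy when the question is withheld. This is an illustration under stated assumptions, not the authors' code; the `generate(prompt) -> str` interface and the `examples` record schema are hypothetical stand-ins for whatever LLM API and dataset format one actually uses.

```python
# Hypothetical sketch of a choices-only probe for MCQA, assuming a
# black-box `generate(prompt) -> str` LLM call and examples of the form
# {"question": str, "choices": list[str], "label": int}.

LETTERS = "ABCD"

def format_prompt(question, choices, choices_only=False):
    """Build a standard MCQA prompt, or a choices-only prompt in which
    the question text is withheld from the model."""
    lines = [] if choices_only else [f"Question: {question}"]
    lines += [f"({LETTERS[i]}) {c}" for i, c in enumerate(choices)]
    lines.append("Answer with a single letter:")
    return "\n".join(lines)

def accuracy(examples, generate, choices_only=False):
    """Fraction of examples where the model's first output letter
    matches the gold label."""
    correct = 0
    for ex in examples:
        prompt = format_prompt(ex["question"], ex["choices"], choices_only)
        pred = generate(prompt).strip()[:1].upper()
        correct += pred == LETTERS[ex["label"]]
    return correct / len(examples)

# Choices-only accuracy well above chance signals dataset artifacts;
# a small full-vs-choices-only gap would suggest shortcut reliance.
```

The paper's contrast set then tests whether such shortcuts actually drive predictions when the question is present, rather than inferring this from choices-only accuracy alone.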
Anthology ID:
2024.knowllm-1.2
Volume:
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Sha Li, Manling Li, Michael JQ Zhang, Eunsol Choi, Mor Geva, Peter Hase, Heng Ji
Venues:
KnowLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
15–26
URL:
https://aclanthology.org/2024.knowllm-1.2
DOI:
10.18653/v1/2024.knowllm-1.2
Cite (ACL):
Nishant Balepur and Rachel Rudinger. 2024. Is Your Large Language Model Knowledgeable or a Choices-Only Cheater?. In Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024), pages 15–26, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Is Your Large Language Model Knowledgeable or a Choices-Only Cheater? (Balepur & Rudinger, KnowLLM-WS 2024)
PDF:
https://aclanthology.org/2024.knowllm-1.2.pdf