Characterizing LLM Abstention Behavior in Science QA with Context Perturbations

Bingbing Wen, Bill Howe, Lucy Wang
Abstract
The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. We probe model sensitivity in several settings: removing gold context, replacing gold context with irrelevant context, and providing additional context beyond what is given. In experiments on four QA datasets with six LLMs, we show that performance varies greatly across models, across the type of context provided, and also by question type; in particular, many LLMs seem unable to abstain from answering boolean questions using standard QA prompts. Our analysis also highlights the unexpected impact of abstention performance on QA task accuracy. Counter-intuitively, in some settings, replacing gold context with irrelevant context or adding irrelevant context to gold context can improve abstention performance in a way that results in improvements in task performance. Our results imply that changes are needed in QA dataset design and evaluation to more effectively assess the correctness and downstream impacts of model abstention.
Anthology ID: 2024.findings-emnlp.197
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3437–3450
URL: https://aclanthology.org/2024.findings-emnlp.197
Cite (ACL):
Bingbing Wen, Bill Howe, and Lucy Wang. 2024. Characterizing LLM Abstention Behavior in Science QA with Context Perturbations. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3437–3450, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Characterizing LLM Abstention Behavior in Science QA with Context Perturbations (Wen et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.197.pdf