Null-Shot Prompting: Rethinking Prompting Large Language Models With Hallucination

Pittawat Taveekitworachai, Febri Abdullah, Ruck Thawonmas


Abstract
This paper presents a series of investigations into an interesting phenomenon: we observe performance increases in large language models (LLMs) when providing a prompt that causes and exploits hallucination. We propose null-shot prompting, a counter-intuitive approach in which we intentionally instruct LLMs to look at and utilize information from a null section. We investigate null-shot prompting on a wide range of tasks, including arithmetic reasoning, commonsense reasoning, and reading comprehension. We observe substantial performance increases in arithmetic reasoning across various models, with up to a 44.62% increase over the baseline for one model. We therefore investigate this task further using a more challenging mathematics problem-solving benchmark. We observe that LLMs also benefit from hallucination under null-shot prompting in this task and discuss which mathematical topics benefit most from introducing hallucination into the prompt. We continue our investigation by evaluating the hallucination detection abilities of the LLMs when using null-shot prompting, and find, surprisingly, that hallucination in prompts can improve the hallucination detection abilities of many LLMs. We also examine the effects of introducing both reasoning, which is known to mitigate hallucination, and hallucination simultaneously in the prompt, and observe another surprising turn on the mathematics problem-solving benchmark, with many performance improvements. We hope this paper will spark more interest, investigation, and discussion of how hallucination in prompts affects LLMs and even bolsters them in certain cases.
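As the abstract describes it, null-shot prompting instructs the model to consult a section of the prompt that does not actually exist. A minimal sketch of how such a prompt might be constructed is below; the exact instruction wording, the function name, and the sample task are assumptions for illustration, not the paper's verbatim template.

```python
def build_null_shot_prompt(task_instruction: str, question: str) -> str:
    """Prepend an instruction referring to a nonexistent 'Examples' section.

    Note: the prompt deliberately contains no such section; per the
    abstract, the model's attempt to use it is what is being exploited.
    The phrasing here is a hypothetical reconstruction.
    """
    null_shot_phrase = (
        "Look at examples in the Examples section and utilize "
        "information from it to perform the following task.\n"
    )
    return f"{null_shot_phrase}{task_instruction}\n{question}"


prompt = build_null_shot_prompt(
    "Solve the arithmetic word problem step by step.",
    "If a pen costs 3 dollars, how much do 4 pens cost?",
)
print(prompt)
```

The key property is that the prompt references an "Examples" section while supplying none, in contrast to few-shot prompting, where real demonstrations would be inserted at that point.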
Anthology ID:
2024.emnlp-main.740
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13321–13361
URL:
https://aclanthology.org/2024.emnlp-main.740
DOI:
10.18653/v1/2024.emnlp-main.740
Cite (ACL):
Pittawat Taveekitworachai, Febri Abdullah, and Ruck Thawonmas. 2024. Null-Shot Prompting: Rethinking Prompting Large Language Models With Hallucination. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13321–13361, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Null-Shot Prompting: Rethinking Prompting Large Language Models With Hallucination (Taveekitworachai et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.740.pdf
Software:
2024.emnlp-main.740.software.zip
Data:
2024.emnlp-main.740.data.zip