Analysis of LLM’s “Spurious” Correct Answers Using Evidence Information of Multi-hop QA Datasets

Ai Ishii, Naoya Inoue, Hisami Suzuki, Satoshi Sekine


Abstract
Recent LLMs show impressive accuracy on one of the hallmark tasks of language understanding, Question Answering (QA). However, it is not clear whether the correct answers LLMs provide are actually grounded in correct knowledge about the question. In this paper, we use multi-hop QA datasets to evaluate the accuracy of the knowledge LLMs use to answer questions, and show that as much as 31% of the LLMs' correct answers are in fact spurious, i.e., the answer is correct but the knowledge the LLM used to ground it is wrong. We present an analysis of these spurious correct answers by GPT-4 using three datasets in two languages, and suggest future pathways for correcting the grounding information using existing external knowledge bases.
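The abstract describes classifying a model's correct answers as "spurious" when the answer matches the gold label but the evidence the model grounds it on does not. As a rough illustration only (not the authors' pipeline), the sketch below assumes HotpotQA-style examples, where each item carries gold supporting facts as (title, sentence-id) pairs, and a hypothetical query_llm() placeholder that returns the model's answer together with the sentences it cites as evidence; answers are compared by normalized exact match and evidence by exact set comparison.

```python
# Hypothetical sketch of the "spurious correct answer" check described in the
# abstract; query_llm() is a placeholder, not the authors' actual method.
import re
import string


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def query_llm(question: str) -> tuple[str, set[tuple[str, int]]]:
    """Placeholder: return (answer, cited evidence as (title, sent_id) pairs)."""
    raise NotImplementedError("plug in your LLM call here")


def classify(example: dict) -> str:
    """Label one HotpotQA-style example as grounded-correct, spurious, or wrong."""
    answer, cited = query_llm(example["question"])
    gold_evidence = set(map(tuple, example["supporting_facts"]))
    answer_ok = normalize(answer) == normalize(example["answer"])
    evidence_ok = cited == gold_evidence  # softer overlap criteria are possible
    if answer_ok and evidence_ok:
        return "correct, grounded"
    if answer_ok:
        return "spurious correct"  # right answer, wrong grounding knowledge
    return "wrong"
```

The paper's actual evidence-matching criteria may differ (e.g., partial overlap or per-hop checks across the multi-hop chain); the exact-set comparison above is just one plausible instantiation.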
Anthology ID: 2024.kallm-1.3
Volume: Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues: KaLLM | WS
Publisher: Association for Computational Linguistics
Pages: 24–34
URL: https://aclanthology.org/2024.kallm-1.3
DOI: 10.18653/v1/2024.kallm-1.3
Cite (ACL):
Ai Ishii, Naoya Inoue, Hisami Suzuki, and Satoshi Sekine. 2024. Analysis of LLM’s “Spurious” Correct Answers Using Evidence Information of Multi-hop QA Datasets. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 24–34, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Analysis of LLM’s “Spurious” Correct Answers Using Evidence Information of Multi-hop QA Datasets (Ishii et al., KaLLM-WS 2024)
PDF: https://aclanthology.org/2024.kallm-1.3.pdf