Detrimental Contexts in Open-Domain Question Answering

Philhoon Oh, James Thorne


Abstract
For knowledge-intensive NLP tasks, it has been widely accepted that accessing more information contributes to improvements in the model’s end-to-end performance. However, counter-intuitively, too much context can have a negative impact on the model when evaluated on common question answering (QA) datasets. In this paper, we analyze how passages can have a detrimental effect on retrieve-then-read architectures used in question answering. Our empirical evidence indicates that the current read architecture does not fully leverage the retrieved passages: its performance degrades significantly when all retrieved passages are used, compared with a subset of them. Our findings demonstrate that model accuracy can be improved by 10% on two popular QA datasets by filtering out detrimental passages. Moreover, these gains are achieved with existing retrieval methods, without further training or data. We further highlight the challenges associated with identifying detrimental passages. First, even with the correct context, the model can make an incorrect prediction, making it difficult to determine which passages are most influential. Second, evaluation typically relies on lexical matching, which is not robust to variations of correct answers. Despite these limitations, our experimental results underscore the pivotal role of identifying and removing detrimental passages for a context-efficient retrieve-then-read pipeline.
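The abstract's second limitation concerns lexical-match evaluation. A minimal sketch of a SQuAD-style exact-match scorer illustrates the brittleness it refers to; this is a standard QA metric, not code from the paper, and the `normalize` helper is an assumption following the common SQuAD normalization (lowercasing, stripping punctuation and articles):

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, remove articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the prediction lexically matches any gold answer after normalization."""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

# Normalization absorbs some surface variation...
print(exact_match("the United States", ["United States"]))  # True
# ...but a semantically correct paraphrase still scores as wrong:
print(exact_match("USA", ["United States"]))                # False
```

This brittleness is why, as the abstract notes, lexical matching can misjudge whether a passage truly helped or hurt the reader's answer.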
Anthology ID:
2023.findings-emnlp.776
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11589–11605
URL:
https://aclanthology.org/2023.findings-emnlp.776
DOI:
10.18653/v1/2023.findings-emnlp.776
Cite (ACL):
Philhoon Oh and James Thorne. 2023. Detrimental Contexts in Open-Domain Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11589–11605, Singapore. Association for Computational Linguistics.
Cite (Informal):
Detrimental Contexts in Open-Domain Question Answering (Oh & Thorne, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.776.pdf