More Bang for your Context: Virtual Documents for Question Answering over Long Documents

Yosi Mass, Boaz Carmeli, Asaf Yehudai, Assaf Toledo, Nathaniel Mills


Abstract
We deal with the problem of Question Answering (QA) over a long document, which poses a challenge for modern Large Language Models (LLMs). Although LLMs can handle increasingly longer context windows, they struggle to effectively utilize the long content. To address this issue, we introduce the concept of a virtual document (VDoc). A VDoc is created by selecting chunks from the original document that are most likely to contain the information needed to answer the user’s question, while ensuring they fit within the LLM’s context window. We hypothesize that providing a short and focused VDoc to the LLM is more effective than filling the entire context window with less relevant information. Our experiments confirm this hypothesis and demonstrate that using VDocs improves results on the QA task.
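The chunk-selection idea in the abstract can be sketched in code. The sketch below is an illustrative assumption, not the authors' pipeline: it uses fixed-size word chunks, a toy lexical-overlap score as a stand-in for whatever retriever the paper employs, and a word budget as a proxy for the LLM's context window. The function names (`chunk_text`, `score_chunk`, `build_vdoc`) and all parameters are hypothetical.

```python
def chunk_text(text, chunk_size=50):
    """Split a document into fixed-size word chunks (an assumed chunking scheme)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score_chunk(question, chunk):
    """Toy relevance score: word overlap with the question.
    A real system would use a trained dense or sparse retriever."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / (len(q_words) or 1)

def build_vdoc(question, document, budget_words=120, chunk_size=50):
    """Greedily keep the highest-scoring chunks that fit the word budget,
    then restore original document order so the VDoc reads coherently."""
    chunks = chunk_text(document, chunk_size)
    ranked = sorted(enumerate(chunks),
                    key=lambda ic: score_chunk(question, ic[1]),
                    reverse=True)
    selected, used = [], 0
    for idx, chunk in ranked:
        n = len(chunk.split())
        if used + n <= budget_words:
            selected.append((idx, chunk))
            used += n
    selected.sort()  # back to original order
    return "\n".join(chunk for _, chunk in selected)
```

The resulting string can be passed to an LLM in place of the full document; per the abstract's hypothesis, this short, focused context should outperform filling the window with less relevant text.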
Anthology ID:
2024.findings-emnlp.757
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12936–12942
URL:
https://aclanthology.org/2024.findings-emnlp.757
Cite (ACL):
Yosi Mass, Boaz Carmeli, Asaf Yehudai, Assaf Toledo, and Nathaniel Mills. 2024. More Bang for your Context: Virtual Documents for Question Answering over Long Documents. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12936–12942, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
More Bang for your Context: Virtual Documents for Question Answering over Long Documents (Mass et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.757.pdf