Context Filtering with Reward Modeling in Question Answering

Sangryul Kim, James Thorne


Abstract
Question Answering (QA) in NLP is the task of finding answers to a query within a relevant context retrieved by a retrieval system. However, the mix of relevant and irrelevant information in these contexts can hinder performance on QA tasks. To address this, we introduce a context filtering approach that removes non-essential details, summarizing crucial content through Reward Modeling. This method emphasizes retaining vital information while omitting extraneous details during summarization model training. We offer a framework for developing efficient QA models by discerning useful information from dataset pairs, bypassing the need for costly human evaluation. Furthermore, we show that our approach can significantly outperform the baseline, as evidenced by a 6.8-fold increase in the EM Per Token (EPT) metric, which we propose as a measure of token efficiency, indicating a notable token-efficiency gain in low-resource settings.
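
The page does not define the EM Per Token (EPT) metric beyond its name. Below is a minimal Python sketch, assuming EPT is simply exact-match (EM) accuracy divided by the average token count of the filtered contexts, so that shorter contexts that preserve answer accuracy score higher. The whitespace tokenizer and the SQuAD-style answer normalization are assumptions for illustration, not the authors' implementation.

import re
import string

def normalize(text: str) -> str:
    # SQuAD-style normalization (assumed): lowercase, strip punctuation
    # and English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    # 1 if the normalized prediction equals the normalized gold answer.
    return int(normalize(prediction) == normalize(gold))

def em_per_token(predictions, golds, contexts) -> float:
    # EM Per Token (assumed definition): average EM divided by the
    # average filtered-context length in whitespace tokens.
    em = sum(exact_match(p, g) for p, g in zip(predictions, golds)) / len(golds)
    avg_tokens = sum(len(c.split()) for c in contexts) / len(contexts)
    return em / avg_tokens

Under this reading, two systems with the same EM of 0.80 but average filtered-context lengths of 50 versus 340 tokens would score 0.016 versus about 0.0024 EPT, a 6.8-fold gap of the kind the abstract reports; these numbers are purely illustrative.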
Anthology ID:
2025.coling-main.732
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
11048–11055
URL:
https://aclanthology.org/2025.coling-main.732/
Cite (ACL):
Sangryul Kim and James Thorne. 2025. Context Filtering with Reward Modeling in Question Answering. In Proceedings of the 31st International Conference on Computational Linguistics, pages 11048–11055, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Context Filtering with Reward Modeling in Question Answering (Kim & Thorne, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.732.pdf