Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering

Kosuke Akimoto, Kunihiro Takeoka, Masafumi Oyamada


Abstract
Retrieval-augmented generation models augment the knowledge encoded in a language model by providing additional relevant external knowledge (context) during generation. Although the quantity and quality of context have been shown to affect the performance of retrieval-augmented generation models at inference time, limited research explores how these characteristics affect model training. This paper examines how context quantity and quality during model training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model, on extractive open-domain question answering tasks. Experimental results suggest that FiD models overfit to the context quality seen during training and perform suboptimally when evaluated on contexts of different quality. The experiments also reveal that FiD models trained with different context quality exhibit different cross-attention distribution patterns: as context quality during training increases, FiD models tend to attend more uniformly to each passage in the context. Finally, based on these observations, we propose a method to mitigate overfitting to a specific context quality by introducing a bias to the cross-attention distribution, which we demonstrate improves the performance of FiD models on contexts of different quality.
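The proposed fix described in the abstract, biasing the decoder's cross-attention distribution over retrieved passages, can be pictured with a minimal sketch. The snippet below is purely illustrative and is not the paper's exact formulation: it assumes a PyTorch FiD-style decoder whose attention logits span the concatenation of all retrieved passages, and the function name, arguments, and the particular per-passage bias and temperature scheme are assumptions made here for clarity.

```python
import torch

def biased_cross_attention_scores(scores, passage_ids, temperature=1.0, passage_bias=None):
    """Illustrative sketch: add a per-passage bias to FiD decoder cross-attention
    logits before the softmax, nudging the model to attend more (or less)
    uniformly across the retrieved passages.

    scores:       (batch, heads, tgt_len, src_len) raw attention logits over the
                  concatenated passage tokens
    passage_ids:  (batch, src_len) index of the passage each source token belongs to
    passage_bias: (batch, n_passages) optional bias per passage; None means no bias
    """
    if passage_bias is not None:
        # Broadcast the per-passage bias to every token of that passage.
        token_bias = torch.gather(passage_bias, 1, passage_ids)  # (batch, src_len)
        scores = scores + token_bias[:, None, None, :]
    # A temperature > 1 flattens the distribution toward more uniform attention.
    return torch.softmax(scores / temperature, dim=-1)

# Toy usage: 2 passages of 3 tokens each, biased toward the first passage.
scores = torch.randn(1, 2, 4, 6)
passage_ids = torch.tensor([[0, 0, 0, 1, 1, 1]])
bias = torch.tensor([[1.0, -1.0]])
probs = biased_cross_attention_scores(scores, passage_ids, temperature=2.0, passage_bias=bias)
```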
Anthology ID: 2023.findings-emnlp.784
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 11711–11729
URL: https://aclanthology.org/2023.findings-emnlp.784
DOI: 10.18653/v1/2023.findings-emnlp.784
Cite (ACL): Kosuke Akimoto, Kunihiro Takeoka, and Masafumi Oyamada. 2023. Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11711–11729, Singapore. Association for Computational Linguistics.
Cite (Informal): Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering (Akimoto et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.784.pdf