Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models

Mingda Li, Xinyu Li, Yifan Chen, Wenfeng Xuan, Weinan Zhang


Abstract
Although Retrieval-Augmented Large Language Models (RALMs) demonstrate superior factuality, they do not consistently outperform the original retrieval-free Language Models (LMs). Our experiments reveal that this example-level performance inconsistency exists not only between retrieval-augmented and retrieval-free LMs but also among different retrievers. To understand this phenomenon, we investigate the degeneration behavior of RALMs and theoretically decompose it into four categories. Further analysis based on our decomposition reveals that the innate difference in knowledge sources and the unpredictable degeneration of the reader model contribute most to the inconsistency. Drawing from our analysis, we introduce Ensemble of Retrievers (EoR), a trainable framework that can adaptively retrieve from different knowledge sources and effectively decrease unpredictable reader errors. Our experiments on Open Domain Question Answering show that EoR substantially improves performance over the RALM with a single retriever by considerably reducing inconsistent behaviors.
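The abstract does not spell out how EoR aggregates answers across knowledge sources, but the ensemble-of-retrievers idea can be illustrated with a minimal sketch: run the reader once per retriever (including a retrieval-free run) and keep the answer most consistent with the others. The retriever/reader interfaces and the token-overlap voting rule below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of an "ensemble of retrievers" answer-selection loop.
# All interfaces (retrievers, reader) are hypothetical placeholders.

from collections import Counter
from typing import Callable, List


def token_f1(a: str, b: str) -> float:
    """Token-overlap F1 between two answer strings (a common QA similarity)."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)


def ensemble_answer(
    question: str,
    retrievers: List[Callable[[str], str]],  # each maps question -> retrieved context ("" = no retrieval)
    reader: Callable[[str, str], str],       # maps (question, context) -> answer string
) -> str:
    """Generate one answer per retriever, then return the answer that agrees
    most (by average pairwise similarity) with the other candidates."""
    candidates = [reader(question, retrieve(question)) for retrieve in retrievers]
    scores = [
        sum(token_f1(c, other) for j, other in enumerate(candidates) if j != i)
        / max(len(candidates) - 1, 1)
        for i, c in enumerate(candidates)
    ]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

In the paper's trainable framework the selection is learned rather than fixed; the voting rule above only conveys how combining retrievers can mask the inconsistent failures of any single one.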
Anthology ID:
2024.findings-acl.288
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4833–4850
URL:
https://aclanthology.org/2024.findings-acl.288
Cite (ACL):
Mingda Li, Xinyu Li, Yifan Chen, Wenfeng Xuan, and Weinan Zhang. 2024. Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 4833–4850, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models (Li et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.288.pdf