MrRank: Improving Question Answering Retrieval System through Multi-Result Ranking Model

Danupat Khamnuansin, Tawunrat Chalothorn, Ekapol Chuangsuwanich


Abstract
Large Language Models (LLMs) often struggle with hallucinations and outdated information. To address this, Information Retrieval (IR) systems can be employed to augment LLMs with up-to-date knowledge. However, existing IR techniques have their own shortcomings, which create a performance bottleneck. Given the wide array of available IR systems, combining diverse approaches is a viable strategy, yet prior attempts have yielded only limited gains. In this work, we propose an approach that leverages learning-to-rank techniques to combine heterogeneous IR systems. We demonstrate the method on two Retrieval Question Answering (ReQA) tasks. Our empirical results show a significant performance improvement, outperforming previous approaches and achieving state-of-the-art results on ReQA SQuAD.
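The core idea named in the abstract, combining the outputs of heterogeneous retrievers with a learned ranking model, can be illustrated with a small sketch. The snippet below is an illustrative toy example under assumed details, not the MrRank architecture from the paper: it pools candidates returned by a hypothetical sparse and dense retriever, uses their retrieval scores as features, and trains a simple pointwise logistic-loss ranker to produce a combined ordering. The retriever names, the feature choice, and the toy data are all assumptions.

# Minimal sketch of learning-to-rank over pooled results from heterogeneous
# retrievers. All names, features, and the toy data are illustrative
# assumptions; this is not the architecture described in the paper.

import math
import random

def candidate_features(pooled_scores):
    """Turn per-retriever scores for one candidate into a feature vector.

    `pooled_scores` maps a retriever name to its retrieval score, with 0.0
    used when that retriever did not return the candidate."""
    return [pooled_scores.get("sparse", 0.0), pooled_scores.get("dense", 0.0)]

def score(weights, features):
    # Pointwise ranking score: a linear model passed through a sigmoid.
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, lr=0.5, epochs=200):
    """Pointwise learning-to-rank with logistic loss.

    `examples` is a list of (features, label) pairs, where label is 1 for a
    candidate that answers the question and 0 otherwise."""
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            p = score(weights, features)
            # Gradient of the logistic loss with respect to each weight.
            for i, f in enumerate(features):
                weights[i] -= lr * (p - label) * f
    return weights

def rerank(weights, candidates):
    """Sort pooled candidates by the learned combined score."""
    return sorted(
        candidates,
        key=lambda c: score(weights, candidate_features(c[1])),
        reverse=True,
    )

if __name__ == "__main__":
    # Toy training data: feature vectors are [sparse_score, dense_score].
    train_examples = [
        ([0.9, 0.2], 1), ([0.1, 0.8], 1),   # relevant answers found by either retriever
        ([0.3, 0.1], 0), ([0.0, 0.3], 0),   # irrelevant candidates
    ]
    w = train(train_examples)

    # Candidates pooled from the two retrievers for one question.
    pooled = [
        ("answer_a", {"sparse": 0.8, "dense": 0.1}),
        ("answer_b", {"sparse": 0.2, "dense": 0.9}),
        ("answer_c", {"sparse": 0.1, "dense": 0.2}),
    ]
    for answer, scores in rerank(w, pooled):
        print(answer, round(score(w, candidate_features(scores)), 3))

In this toy setup the learned weights simply decide how much to trust each retriever's score; a full re-ranking model would typically use richer features or a neural encoder, which this sketch does not attempt to reproduce.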
Anthology ID: 2024.findings-acl.282
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4750–4762
URL: https://aclanthology.org/2024.findings-acl.282
Cite (ACL): Danupat Khamnuansin, Tawunrat Chalothorn, and Ekapol Chuangsuwanich. 2024. MrRank: Improving Question Answering Retrieval System through Multi-Result Ranking Model. In Findings of the Association for Computational Linguistics ACL 2024, pages 4750–4762, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): MrRank: Improving Question Answering Retrieval System through Multi-Result Ranking Model (Khamnuansin et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.282.pdf