Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering

Tobias Schimanski, Jingwei Ni, Mathias Kraus, Elliott Ash, Markus Leippold


Abstract
Advances towards more faithful and traceable answers from Large Language Models (LLMs) are crucial for various research and practical endeavors. One avenue toward this goal is grounding answers in reliable sources. However, LLMs have so far performed insufficiently at this Evidence-Based QA, both in citing the correct sources (source quality) and in truthfully representing the information within those sources (answer attributability). In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. Specifically, we introduce a data generation pipeline with automated data quality filters, which can synthesize diversified, high-quality training and testing data at scale. We further introduce four test sets to benchmark the robustness of fine-tuned specialist models. Extensive evaluation shows that fine-tuning on synthetic data improves performance on both in- and out-of-distribution data. Furthermore, we show that data quality, which can be drastically improved by the proposed quality filters, matters more than quantity in improving Evidence-Based QA.
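To make the idea of an automated data quality filter concrete, the sketch below shows one plausible filtering step for synthetic Evidence-Based QA examples: keep an example only if every citation in the answer resolves to a provided source and the answer has some lexical overlap with each cited passage. All names (`SyntheticExample`, `passes_quality_filters`, the `[1]`-style citation markers) are hypothetical illustrations, not the paper's actual pipeline.

```python
# Hypothetical sketch of an automated quality filter for synthetic Evidence-Based QA data.
# Field names and filtering criteria are assumptions for illustration only.

import re
from dataclasses import dataclass

@dataclass
class SyntheticExample:
    question: str
    answer: str               # answer text containing citation markers like "[1]"
    evidence: dict[int, str]  # source id -> source passage

def cited_sources(answer: str) -> set[int]:
    """Extract numeric citation markers such as [1], [2] from the answer."""
    return {int(m) for m in re.findall(r"\[(\d+)\]", answer)}

def passes_quality_filters(ex: SyntheticExample) -> bool:
    """Keep an example only if every citation points to a provided source
    and the answer shares minimal lexical overlap with each cited passage."""
    cites = cited_sources(ex.answer)
    if not cites or not cites.issubset(ex.evidence):
        return False  # missing or dangling citations -> poor source quality
    answer_tokens = set(ex.answer.lower().split())
    for sid in cites:
        src_tokens = set(ex.evidence[sid].lower().split())
        # crude attributability proxy: require some token overlap with the cited source
        if len(answer_tokens & src_tokens) < 5:
            return False
    return True

# Usage: filtered = [ex for ex in synthetic_data if passes_quality_filters(ex)]
```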
Anthology ID:
2024.acl-long.105
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1913–1931
URL:
https://aclanthology.org/2024.acl-long.105
Cite (ACL):
Tobias Schimanski, Jingwei Ni, Mathias Kraus, Elliott Ash, and Markus Leippold. 2024. Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1913–1931, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering (Schimanski et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.105.pdf