Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation

David Beauchemin, Richard Khoury, Zachary Gagnon


Abstract
Large Language Models (LLMs) perform outstandingly in various downstream tasks, and the use of the Retrieval-Augmented Generation (RAG) architecture has been shown to improve performance for legal question answering (Nuruzzaman and Hussain, 2020; Louis et al., 2024). However, applications to question answering over insurance documents, a specific type of legal document, remain limited. This paper introduces two corpora: the Quebec Automobile Insurance Expertise Reference Corpus and a set of 82 Expert Answers to Layperson Automobile Insurance Questions. Our study leverages both corpora to automatically and manually assess GPT-4o, a state-of-the-art (SOTA) LLM, on answering Quebec automobile insurance questions. Our results demonstrate that, on average, using our expertise reference corpus produces better responses on both automatic and manual evaluation metrics. However, they also highlight that LLM QA is not reliable enough for mass use in critical areas. Indeed, our results show that between 5% and 13% of answered questions include a false statement that could lead to customer misunderstanding.
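A minimal sketch of the kind of retrieval-augmented QA setup the abstract describes (not the authors' implementation): a layperson question is matched against a reference corpus, the most relevant passages are retrieved, and a grounded prompt is assembled for the LLM. The corpus snippets, helper names, and TF-IDF retriever below are illustrative assumptions; the paper itself uses GPT-4o with its own expertise reference corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-in for an expertise reference corpus (placeholder text only).
corpus = [
    "Collision coverage pays for damage to your own vehicle after an at-fault accident.",
    "In Quebec, bodily injury from automobile accidents is covered by the public plan.",
    "Optional endorsements can extend coverage beyond the basic automobile policy.",
]

def retrieve(question, documents, k=2):
    """Return the k documents most similar to the question (TF-IDF + cosine similarity)."""
    matrix = TfidfVectorizer().fit_transform(documents + [question])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [documents[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(question, passages):
    """Assemble a grounded prompt: retrieved passages followed by the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the insurance question using only the reference passages below.\n"
        f"Reference passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

question = "Does my insurance pay for repairs to my own car after an accident?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # This prompt would then be sent to an LLM such as GPT-4o.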
Anthology ID:
2024.nllp-1.5
Volume:
Proceedings of the Natural Legal Language Processing Workshop 2024
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Nikolaos Aletras, Ilias Chalkidis, Leslie Barrett, Cătălina Goanță, Daniel Preoțiuc-Pietro, Gerasimos Spanakis
Venue:
NLLP
Publisher:
Association for Computational Linguistics
Pages:
48–60
URL:
https://aclanthology.org/2024.nllp-1.5
Cite (ACL):
David Beauchemin, Richard Khoury, and Zachary Gagnon. 2024. Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation. In Proceedings of the Natural Legal Language Processing Workshop 2024, pages 48–60, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation (Beauchemin et al., NLLP 2024)
PDF:
https://aclanthology.org/2024.nllp-1.5.pdf