A Novel Multi-Step Prompt Approach for LLM-based Q&As on Banking Supervisory Regulation

Daniele Licari, Canio Benedetto, Praveen Bushipaka, Alessandro De Gregorio, Marco De Leonardis, Tommaso Cucinotta


Abstract
This paper investigates the use of large language models (LLMs) for analyzing and answering questions on banking supervisory regulation concerning reporting obligations. We introduce a multi-step prompt construction method that enriches the context provided to the LLM, resulting in more precise and informative answers. This multi-step approach is compared with a standard “zero-shot” approach, which lacks context enrichment. To assess the quality of the generated responses, we use an LLM Evaluator. Our findings indicate that the multi-step approach significantly outperforms the zero-shot method, producing more comprehensive and accurate responses.
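The abstract contrasts a context-enriched multi-step prompt with a plain zero-shot prompt. The following sketch illustrates that contrast in minimal form; it is not the authors' implementation, the retrieval step is mocked, and all function names and the exact step breakdown are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): a zero-shot prompt
# versus a multi-step prompt whose context is enriched before the question
# is posed to the LLM. Retrieval is mocked with a tiny in-memory corpus.

def zero_shot_prompt(question: str) -> str:
    """Baseline: the question is sent with no supporting context."""
    return f"Answer the following question on banking supervisory regulation:\n{question}"

def retrieve_passages(question: str) -> list[str]:
    """Step 1 (mocked): look up regulation passages relevant to the question."""
    corpus = {
        "reporting": "Institutions shall submit supervisory reports at the "
                     "frequencies laid down by the competent authority.",
    }
    return [text for key, text in corpus.items() if key in question.lower()]

def multi_step_prompt(question: str) -> str:
    """Steps 2-3: assemble the retrieved context, then append the question."""
    passages = retrieve_passages(question)
    context = "\n".join(f"- {p}" for p in passages) or "- (no passage found)"
    return (
        "You are an expert on banking supervisory regulation.\n"
        f"Relevant passages:\n{context}\n"
        f"Question: {question}\n"
        "Answer using only the passages above."
    )

if __name__ == "__main__":
    q = "What are the reporting obligations of credit institutions?"
    print(zero_shot_prompt(q))
    print(multi_step_prompt(q))
```

Either prompt string would then be sent to the LLM; the multi-step variant grounds the answer in retrieved regulatory text rather than the model's parametric knowledge alone.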
Anthology ID:
2024.clicit-1.59
Volume:
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Month:
December
Year:
2024
Address:
Pisa, Italy
Editors:
Felice Dell'Orletta, Alessandro Lenci, Simonetta Montemagni, Rachele Sprugnoli
Venue:
CLiC-it
Publisher:
CEUR Workshop Proceedings
Pages:
496–509
URL:
https://aclanthology.org/2024.clicit-1.59/
Cite (ACL):
Daniele Licari, Canio Benedetto, Praveen Bushipaka, Alessandro De Gregorio, Marco De Leonardis, and Tommaso Cucinotta. 2024. A Novel Multi-Step Prompt Approach for LLM-based Q&As on Banking Supervisory Regulation. In Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024), pages 496–509, Pisa, Italy. CEUR Workshop Proceedings.
Cite (Informal):
A Novel Multi-Step Prompt Approach for LLM-based Q&As on Banking Supervisory Regulation (Licari et al., CLiC-it 2024)
PDF:
https://aclanthology.org/2024.clicit-1.59.pdf