Towards a More Comprehensive Evaluation for Italian LLMs

Luca Moroni, Simone Conia, Federico Martelli, Roberto Navigli


Abstract
Recent Large Language Models (LLMs) have shown impressive performance in addressing complex aspects of human language. These models have also demonstrated significant capabilities in processing and generating Italian text, achieving state-of-the-art results on current benchmarks for the Italian language. However, the number of such benchmarks is still insufficient. A case in point is the “Open Ita LLM Leaderboard”, which supports only three benchmarks despite being one of the most popular suites for the evaluation of Italian-speaking LLMs. In this paper, we analyze the pitfalls of existing evaluation suites and propose two ways to address this gap: i) a new suite of automatically-translated benchmarks, drawn from the most popular English benchmarks; and ii) the adaptation of existing manually-created datasets so that they can be used to complement the evaluation of Italian LLMs. We discuss the pros and cons of both approaches and release all our data to foster further research on the evaluation of Italian-speaking LLMs.
Anthology ID:
2024.clicit-1.67
Volume:
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Month:
December
Year:
2024
Address:
Pisa, Italy
Editors:
Felice Dell'Orletta, Alessandro Lenci, Simonetta Montemagni, Rachele Sprugnoli
Venue:
CLiC-it
Publisher:
CEUR Workshop Proceedings
Pages:
584–599
URL:
https://aclanthology.org/2024.clicit-1.67/
Cite (ACL):
Luca Moroni, Simone Conia, Federico Martelli, and Roberto Navigli. 2024. Towards a More Comprehensive Evaluation for Italian LLMs. In Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024), pages 584–599, Pisa, Italy. CEUR Workshop Proceedings.
Cite (Informal):
Towards a More Comprehensive Evaluation for Italian LLMs (Moroni et al., CLiC-it 2024)
PDF:
https://aclanthology.org/2024.clicit-1.67.pdf