Evaluating Large Language Models via Linguistic Profiling

Alessio Miaschi, Felice Dell’Orletta, Giulia Venturi


Abstract
Large Language Models (LLMs) undergo extensive evaluation against various benchmarks collected in established leaderboards to assess their performance across multiple tasks. However, to the best of our knowledge, there is a lack of comprehensive studies evaluating these models’ linguistic abilities independent of specific tasks. In this paper, we introduce a novel evaluation methodology designed to test LLMs’ sentence generation abilities under specific linguistic constraints. Drawing on the ‘linguistic profiling’ approach, we rigorously investigate the extent to which five LLMs of varying sizes, tested in both zero- and few-shot scenarios, effectively adhere to (morpho)syntactic constraints. Our findings shed light on the linguistic proficiency of LLMs, revealing both their capabilities and limitations in generating linguistically-constrained sentences.
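The evaluation methodology sketched in the abstract boils down to prompting a model for a sentence that must satisfy a given (morpho)syntactic property and then parsing the output to check whether the property holds. The snippet below is a minimal illustrative sketch of that idea, not the authors' code: the prompt wording, the token-count constraint, the generate() stand-in, and the spaCy pipeline en_core_web_sm are all assumptions introduced here for illustration.

    # Minimal sketch (assumed, not the paper's implementation): ask an LLM for a
    # sentence under a simple morphosyntactic constraint, then verify adherence
    # with a dependency parser.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # any UD-style English pipeline would do

    def build_prompt(n_tokens: int) -> str:
        # Hypothetical zero-shot prompt asking for a constrained sentence.
        return f"Write a single English sentence containing exactly {n_tokens} words."

    def satisfies_token_constraint(sentence: str, n_tokens: int) -> bool:
        # Parse the model output and count word-level tokens,
        # ignoring punctuation so the count reflects actual words.
        doc = nlp(sentence)
        n_words = sum(1 for tok in doc if not tok.is_punct)
        return n_words == n_tokens

    # Example usage with a stand-in for the LLM call:
    candidate = "The linguist carefully profiled the model's generated sentences."
    print(build_prompt(8))
    print(satisfies_token_constraint(candidate, 8))

The same check-after-generation pattern extends to other constraints (e.g. requiring a subordinate clause or a given distribution of part-of-speech tags) by swapping in a different verification function over the parsed output.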
Anthology ID:
2024.emnlp-main.166
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2835–2848
URL:
https://aclanthology.org/2024.emnlp-main.166
Cite (ACL):
Alessio Miaschi, Felice Dell’Orletta, and Giulia Venturi. 2024. Evaluating Large Language Models via Linguistic Profiling. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2835–2848, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Evaluating Large Language Models via Linguistic Profiling (Miaschi et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.166.pdf
Data:
 2024.emnlp-main.166.data.zip