Exploring Precision and Recall to assess the quality and diversity of LLMs
Florian Le Bronnec | Alexandre Verine | Benjamin Negrevergne | Yann Chevaleyre | Alexandre Allauzen
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, based on adapting the Precision and Recall metrics from image generation to text generation. This approach enables a nuanced assessment of the quality and diversity of generated text without requiring aligned corpora. Through a comprehensive evaluation of state-of-the-art language models, the study reveals new insights into their performance on open-ended generation tasks, which are not adequately captured by traditional benchmarks. The findings highlight a trade-off between the quality and diversity of generated samples, particularly when models are fine-tuned on instruction datasets or with human feedback. This work extends the toolkit for distribution-based NLP evaluation, offering insights into the practical capabilities and challenges that current LLMs face in generating diverse and high-quality text.
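The abstract does not spell out how distributional Precision and Recall are estimated, so the sketch below is only an illustration, not the authors' code: it implements the widely used k-nearest-neighbor support estimator of Kynkäänniemi et al. (2019), the standard instantiation of these metrics in image generation, applied here to text embeddings. The function names (`knn_radii`, `precision_recall`), the default `k`, and the use of raw Euclidean distance are all assumptions for the sake of a runnable example; the paper's actual estimator and embedding space may differ.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(X, k):
    # Distance from each point in X to its k-th nearest neighbor in X
    # (excluding itself); these radii define balls whose union
    # approximates the support of the underlying distribution.
    # Illustrative helper, not from the paper.
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return np.sort(d, axis=1)[:, k - 1]

def precision_recall(real_emb, gen_emb, k=3):
    # Precision: fraction of generated samples that land inside the
    # estimated support of the real distribution (sample quality).
    r_real = knn_radii(real_emb, k)
    d_gen_to_real = cdist(gen_emb, real_emb)
    precision = np.mean((d_gen_to_real <= r_real[None, :]).any(axis=1))

    # Recall: fraction of real samples that land inside the estimated
    # support of the generated distribution (sample diversity/coverage).
    r_gen = knn_radii(gen_emb, k)
    d_real_to_gen = cdist(real_emb, gen_emb)
    recall = np.mean((d_real_to_gen <= r_gen[None, :]).any(axis=1))
    return precision, recall
```

On matched toy embeddings, e.g. `precision_recall(np.random.randn(500, 32), np.random.randn(500, 32))`, both scores approach 1; shrinking or shifting the generated cloud drives recall down while precision can stay high, which is the quality versus diversity trade-off the abstract describes for instruction-tuned models.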