Q&A-LF : A French Question-Answering Benchmark for Measuring Fine-Grained Lexical Knowledge

Alexander Petrov, Alessandra Thais Mancas, Viviane Binet, Antoine Venant, Francois Lareau, Yves Lepage, Phillippe Langlais


Abstract
We introduce Q&A-LF, a French question-answering benchmark designed to assess the extent to which large language models capture fine-grained lexical knowledge. We investigate the ability of ChatGPT-4o mini, Qwen2.5-14B, Llama3.0-8B, and Llama3.1-8B to answer questions based on lexical functions (LFs) from Meaning-Text Theory. Using prompting setups that vary the number of in-context examples and the amount of context, we find that Qwen and ChatGPT generally outperform the Llama models, reaching up to 70% accuracy, while the Llama models reach just above 60%. We identify LFs that are particularly easy or especially challenging for the models, and we further investigate whether sentence-level context and one-shot prompting improve performance, especially on semantically complex functions.
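To make the task concrete: a lexical function maps a keyword to a lexically constrained value, e.g. the intensifier Magn applied to "fumeur" ('smoker') yields "gros" (as in "gros fumeur", 'heavy smoker'). The sketch below is a hypothetical illustration, not the authors' released code, of how a one-shot LF question of this kind might be posed to a chat model; the LF instance, the French prompt wording, and the helper name build_prompt are all assumptions for illustration.

# A minimal sketch (assumed, not the paper's exact templates) of a
# one-shot prompt asking a model for the value of an LF applied to a keyword.

def build_prompt(lf: str, keyword: str,
                 example: tuple[str, str, str] | None = None) -> str:
    """Build a French question asking for the value of lf(keyword),
    optionally preceded by one worked example (one-shot setup)."""
    lines = []
    if example is not None:
        ex_lf, ex_key, ex_value = example
        lines.append(f"Question : quelle est la valeur de {ex_lf}({ex_key}) ?")
        lines.append(f"Réponse : {ex_value}")
    lines.append(f"Question : quelle est la valeur de {lf}({keyword}) ?")
    lines.append("Réponse :")
    return "\n".join(lines)

if __name__ == "__main__":
    # One-shot: Magn(pluie) = "torrentielle" serves as the in-context example.
    print(build_prompt("Magn", "fumeur",
                       example=("Magn", "pluie", "torrentielle")))

The zero-shot variant simply omits the example argument; a sentence-level-context variant would prepend a sentence containing the keyword before the question.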
Anthology ID:
2025.ranlp-1.110
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
962–969
URL:
https://aclanthology.org/2025.ranlp-1.110/
Cite (ACL):
Alexander Petrov, Alessandra Thais Mancas, Viviane Binet, Antoine Venant, Francois Lareau, Yves Lepage, and Phillippe Langlais. 2025. Q&A-LF : A French Question-Answering Benchmark for Measuring Fine-Grained Lexical Knowledge. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 962–969, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Q&A-LF : A French Question-Answering Benchmark for Measuring Fine-Grained Lexical Knowledge (Petrov et al., RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.110.pdf