Comparative Analysis of Human and Large Language Model Performance in Pharmacology Multiple-Choice Questions

Ricardo Rodriguez, Stéphane Huet, Benoit Favre, Mickael Rouvier


Abstract
In this article, we study the answers generated by a selection of Large Language Models to a set of Multiple Choice Questions in Pharmacology and compare them with the answers provided by students, in order to understand which questions in this clinical domain are difficult for the models relative to humans, and why. We extract the models' internal logits to infer probability distributions over the answer options, and analyse the main features that determine question difficulty using statistical methods. We also provide an extension to the FrenchMedMCQA dataset, consisting of question-answer pairs in pharmacology enriched with student response rates, answer scoring, clinical topics, and annotations on question structure and semantics.
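The paper's exact extraction pipeline is not detailed in the abstract, but the core idea of turning per-option logits into a probability distribution can be sketched as a softmax over the scores a model assigns to each answer letter. The logit values and helper name below are hypothetical, for illustration only.

```python
import math

def option_probabilities(logits):
    """Softmax over per-option logits (e.g., choices A-E) to get a probability distribution."""
    m = max(logits.values())  # subtract the max logit for numerical stability
    exps = {opt: math.exp(v - m) for opt, v in logits.items()}
    total = sum(exps.values())
    return {opt: e / total for opt, e in exps.items()}

# Hypothetical logits for a five-option multiple-choice question
logits = {"A": 2.1, "B": 0.3, "C": -1.0, "D": 1.5, "E": -0.4}
probs = option_probabilities(logits)
predicted = max(probs, key=probs.get)  # the model's most likely answer
```

A distribution like this, rather than a single predicted letter, is what allows model confidence to be compared against student response rates question by question.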
Anthology ID:
2025.ranlp-1.117
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1022–1029
URL:
https://aclanthology.org/2025.ranlp-1.117/
Cite (ACL):
Ricardo Rodriguez, Stéphane Huet, Benoit Favre, and Mickael Rouvier. 2025. Comparative Analysis of Human and Large Language Model Performance in Pharmacology Multiple-Choice Questions. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 1022–1029, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Comparative Analysis of Human and Large Language Model Performance in Pharmacology Multiple-Choice Questions (Rodriguez et al., RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.117.pdf