Leveraging Fine-tuned Large Language Models in Item Parameter Prediction

Suhwa Han, Frank Rijmen, Allison Ames Boykin, Susan Lottridge


Abstract
The study introduces novel approaches for fine-tuning pre-trained LLMs to predict item response theory parameters directly from item texts and structured item attribute variables. The proposed methods were evaluated on a dataset of over 1,000 English Language Arts items currently in the operational pool of a large-scale assessment.
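The core setup the abstract describes (mapping item text to item response theory parameters via supervised regression) can be illustrated with a deliberately simplified stand-in. Everything below is invented for illustration: a bag-of-words linear regressor takes the place of a fine-tuned LLM, and the item texts and difficulty (b) values are made up. It shows only the text-to-parameter regression framing, not the paper's actual models or data.

```python
# Sketch of text -> IRT-parameter regression. A bag-of-words linear model
# stands in for the fine-tuned LLM encoder; all items and b-values are toy.
from collections import Counter

# Toy (item text, IRT difficulty b) pairs -- invented for illustration.
items = [
    ("identify the main idea of the passage", -0.8),
    ("analyze the author's use of figurative language", 1.2),
    ("find the word that means the same as happy", -1.5),
    ("evaluate competing claims across two passages", 1.6),
]

vocab = sorted({w for text, _ in items for w in text.split()})

def featurize(text):
    """Map item text to a bag-of-words count vector over the toy vocab."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

X = [featurize(text) for text, _ in items]
y = [b for _, b in items]

# Fit the linear head by stochastic gradient descent on squared error.
w = [0.0] * len(vocab)
bias = 0.0
lr = 0.05
for _ in range(500):
    for xi, yi in zip(X, y):
        pred = sum(wj * xj for wj, xj in zip(w, xi)) + bias
        err = pred - yi
        for j, xj in enumerate(xi):
            if xj:
                w[j] -= lr * err * xj
        bias -= lr * err

# Predicted difficulties for the training items.
preds = [sum(wj * xj for wj, xj in zip(w, xi)) + bias for xi in X]
```

In the paper's setting, the count vectors would be replaced by representations from a pre-trained LLM fine-tuned end to end, with the structured item attribute variables supplied as additional inputs.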
Anthology ID:
2025.aimecon-main.27
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
250–264
URL:
https://aclanthology.org/2025.aimecon-main.27/
Cite (ACL):
Suhwa Han, Frank Rijmen, Allison Ames Boykin, and Susan Lottridge. 2025. Leveraging Fine-tuned Large Language Models in Item Parameter Prediction. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers, pages 250–264, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Leveraging Fine-tuned Large Language Models in Item Parameter Prediction (Han et al., AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-main.27.pdf