Using LLMs to identify features of personal and professional skills in an open-response situational judgment test

Cole Walsh, Rodica Ivan, Muhammad Zafar Iqbal, Colleen Robb
Abstract
Current methods for assessing personal and professional skills lack scalability because they rely on human raters, while existing NLP-based scoring systems have failed to demonstrate construct validity. This study introduces a new method that uses LLMs to extract construct-relevant features from responses to an assessment of personal and professional skills.
Anthology ID:
2025.aimecon-main.24
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
221–230
URL:
https://aclanthology.org/2025.aimecon-main.24/
Cite (ACL):
Cole Walsh, Rodica Ivan, Muhammad Zafar Iqbal, and Colleen Robb. 2025. Using LLMs to identify features of personal and professional skills in an open-response situational judgment test. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers, pages 221–230, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Using LLMs to identify features of personal and professional skills in an open-response situational judgment test (Walsh et al., AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-main.24.pdf