Measuring Teaching with LLMs

Michael Hardy


Abstract
This paper introduces custom Large Language Models that use sentence-level embeddings to measure teaching quality. The models achieve human-level performance in analyzing classroom transcripts, outperforming the average human inter-rater correlation. Aggregate model scores align with student learning outcomes, establishing a powerful new methodology for scalable teacher feedback. Important limitations are also discussed.
Anthology ID:
2025.aimecon-main.40
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
367–384
URL:
https://aclanthology.org/2025.aimecon-main.40/
Cite (ACL):
Michael Hardy. 2025. Measuring Teaching with LLMs. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers, pages 367–384, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Measuring Teaching with LLMs (Hardy, AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-main.40.pdf