Examining decoding items using engine transcriptions and scoring in early literacy assessment
Zachary Schultz | Mackenzie Young | Debbie Dugdale | Susan Lottridge
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress, 2025
We investigate the reliability of two approaches to scoring early literacy decoding items, in which students are shown a word and asked to say it aloud: rubric-based scoring of the spoken response, and human or AI transcription scored under varying explicit scoring rules. Initial results suggest that the rubric-based approach performs better than the transcription-based methods.