Beyond Agreement: Rethinking Ground Truth in Educational AI Annotation

Danielle R Thomas, Conrad Borchers, Ken Koedinger


Abstract
Humans are biased and inconsistent, yet we keep trusting them to define “ground truth.” This paper questions the overreliance on inter-rater reliability in educational AI and proposes a multidimensional framework that leverages expert-based approaches and close-the-loop validity to build annotations that reflect impact, not just agreement. It’s time we do better.
Anthology ID:
2025.aimecon-main.37
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
345–351
URL:
https://aclanthology.org/2025.aimecon-main.37/
Cite (ACL):
Danielle R Thomas, Conrad Borchers, and Ken Koedinger. 2025. Beyond Agreement: Rethinking Ground Truth in Educational AI Annotation. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers, pages 345–351, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Beyond Agreement: Rethinking Ground Truth in Educational AI Annotation (Thomas et al., AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-main.37.pdf