Operational Alignment of Confidence-Based Flagging Methods in Automated Scoring

Corey Palermo, Troy Chen, Arianto Wibowo


Abstract
Correct answers to math problems do not reveal whether students understand concepts or have merely memorized procedures. Conversation-Based Assessment (CBA) addresses this through AI dialogue, but reliable scoring requires costly pilots and specialized expertise. Our Criteria Development Platform (CDP) enables pre-pilot optimization using synthetic data, reducing development time from months to days. Testing 17 math items across 68 iterations, all items reached our reliability threshold (MCC ≥ 0.80) after refinement, up from 59% initially. Without refinement, 7 items would have remained below this threshold. By making reliability validation accessible, CDP empowers educators to develop assessments that meet automated scoring standards.
Anthology ID:
2025.aimecon-sessions.6
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Coordinated Session Papers
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
56–60
URL:
https://aclanthology.org/2025.aimecon-sessions.6/
Cite (ACL):
Corey Palermo, Troy Chen, and Arianto Wibowo. 2025. Operational Alignment of Confidence-Based Flagging Methods in Automated Scoring. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Coordinated Session Papers, pages 56–60, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Operational Alignment of Confidence-Based Flagging Methods in Automated Scoring (Palermo et al., AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-sessions.6.pdf