Comparison of AI and Human Scoring on A Visual Arts Assessment

Ning Jiang, Yue Huang, Jie Chen


Abstract
This study examines the reliability and comparability of Generative AI scores and human ratings on two performance tasks, one text-based and one drawing-based, in a fourth-grade visual arts assessment. Results show that GPT-4 scores consistently and aligns with human raters, though it is more lenient, and its agreement with humans is slightly lower than the agreement between human raters.
Anthology ID: 2025.aimecon-wip.18
Volume: Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress
Month: October
Year: 2025
Address: Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors: Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue: AIME-Con
Publisher: National Council on Measurement in Education (NCME)
Pages: 147–154
URL: https://aclanthology.org/2025.aimecon-wip.18/
Cite (ACL): Ning Jiang, Yue Huang, and Jie Chen. 2025. Comparison of AI and Human Scoring on A Visual Arts Assessment. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress, pages 147–154, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal): Comparison of AI and Human Scoring on A Visual Arts Assessment (Jiang et al., AIME-Con 2025)
PDF: https://aclanthology.org/2025.aimecon-wip.18.pdf