Addressing Few-Shot LLM Classification Instability Through Explanation-Augmented Distillation

William Muntean, Joe Betts


Abstract
This study compares explanation-augmented knowledge distillation with few-shot in-context learning for LLM-based exam question classification. Fine-tuned smaller language models achieved competitive performance with greater consistency than large-model few-shot approaches, which exhibited notable variability across different in-context examples. Hyperparameter selection proved essential, with extremely low learning rates significantly impairing model performance.
Anthology ID:
2025.aimecon-wip.24
Volume:
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress
Month:
October
Year:
2025
Address:
Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States
Editors:
Joshua Wilson, Christopher Ormerod, Magdalen Beiting Parrish
Venue:
AIME-Con
Publisher:
National Council on Measurement in Education (NCME)
Pages:
197–203
URL:
https://aclanthology.org/2025.aimecon-wip.24/
Cite (ACL):
William Muntean and Joe Betts. 2025. Addressing Few-Shot LLM Classification Instability Through Explanation-Augmented Distillation. In Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress, pages 197–203, Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States. National Council on Measurement in Education (NCME).
Cite (Informal):
Addressing Few-Shot LLM Classification Instability Through Explanation-Augmented Distillation (Muntean & Betts, AIME-Con 2025)
PDF:
https://aclanthology.org/2025.aimecon-wip.24.pdf