Facilitating learning outcome assessment– development of new datasets and analysis of pre-trained language models

Akriti Jindal, Kaylin Kainulainen, Andrew Fisher, Vijay Mago


Abstract
Student mobility reflects academic transfer from one postsecondary institution to another and facilitates students’ educational goals of obtaining multiple credentials and/or advanced training in their field. This process often relies on transfer credit assessment, based on the similarity between learning outcomes, to determine what knowledge and skills were obtained at the sending institution as well as what knowledge and skills still need to be acquired at the receiving institution. As human evaluation can be both challenging and time-consuming, algorithms based on natural language processing can be a reliable tool for assessing transfer credit. In this article, we propose two novel datasets in the fields of Anatomy and Computer Science. Our aim is to probe the similarity between learning outcomes utilising pre-trained embedding models and compare their performance to human-annotated results. We found that ALBERT, MPNet and DistilRoBERTa demonstrated the best ability to predict the similarity between pairs of learning outcomes. However, Davinci, a GPT-3 model that was expected to perform better, provides only a good qualitative explanation rather than an accurate similarity score. The code and datasets are available at https://github.com/JAkriti/New-Dataset-and-Performance-of-Embedding-Models.
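To illustrate the kind of comparison the abstract describes, the following is a minimal sketch, not the authors' released code, of scoring a pair of learning outcomes with a pre-trained sentence-embedding model. It assumes the sentence-transformers library and the "all-mpnet-base-v2" checkpoint as a stand-in for the MPNet variant evaluated in the paper; the two example outcomes are invented for illustration.

```python
# Hedged sketch: predict the similarity of two learning outcomes with a
# pre-trained embedding model, then compare the score against a human rating.
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint; the paper's exact model configurations may differ.
model = SentenceTransformer("all-mpnet-base-v2")

# Hypothetical learning outcomes from a sending and a receiving institution.
sending = "Describe the structure and function of the human skeletal system."
receiving = "Identify the major bones of the body and explain their roles."

# Encode both outcomes and measure cosine similarity between the embeddings.
embeddings = model.encode([sending, receiving])
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Predicted similarity: {score:.3f}")
```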
Anthology ID: 2023.clasp-1.4
Volume: Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
Month: September
Year: 2023
Address: Gothenburg, Sweden
Editors: Ellen Breitholtz, Shalom Lappin, Sharid Loaiciga, Nikolai Ilinykh, Simon Dobnik
Venue: CLASP
Publisher: Association for Computational Linguistics
Pages: 38–47
URL: https://aclanthology.org/2023.clasp-1.4
Cite (ACL):
Akriti Jindal, Kaylin Kainulainen, Andrew Fisher, and Vijay Mago. 2023. Facilitating learning outcome assessment– development of new datasets and analysis of pre-trained language models. In Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), pages 38–47, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal):
Facilitating learning outcome assessment– development of new datasets and analysis of pre-trained language models (Jindal et al., CLASP 2023)
PDF: https://aclanthology.org/2023.clasp-1.4.pdf