Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach

Zheyuan Zhang, Jifan Yu, Juanzi Li, Lei Hou


Abstract
Large Language Models (LLMs) have not only exhibited exceptional performance across various tasks but also demonstrated sparks of intelligence. Recent studies have focused on assessing their capabilities on human exams and have revealed their impressive competence in different domains. However, cognitive research on the overall knowledge structure of LLMs is still lacking. In this paper, drawing on educational diagnostic assessment methods, we conduct an evaluation using MoocRadar, a meticulously annotated human test dataset grounded in Bloom's Taxonomy. We aim to reveal the knowledge structures of LLMs and gain insights into their cognitive capabilities. This research emphasizes the significance of investigating LLMs' knowledge and understanding their disparate cognitive patterns. By shedding light on models' knowledge, researchers can advance the development and utilization of LLMs in a more informed and effective manner.
Anthology ID: 2023.findings-emnlp.111
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1643–1650
URL: https://aclanthology.org/2023.findings-emnlp.111
DOI: 10.18653/v1/2023.findings-emnlp.111
Cite (ACL): Zheyuan Zhang, Jifan Yu, Juanzi Li, and Lei Hou. 2023. Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1643–1650, Singapore. Association for Computational Linguistics.
Cite (Informal): Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach (Zhang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.111.pdf