Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models

Yuyan Chen, Songzhou Yan, Panjun Liu, Yanghua Xiao


Abstract
Teachers play a central role in imparting knowledge and guiding learners, and the potential of large language models (LLMs) to serve as educators is emerging as an important area of study. Recognizing LLMs' capability to generate educational content can lead to advances in automated and personalized learning. While LLMs have been tested for their comprehension and problem-solving skills, their capability in teaching remains largely unexplored. In teaching, questioning is a key skill that guides students to analyze, evaluate, and synthesize core concepts and principles. Therefore, our research introduces a benchmark to evaluate the questioning capability of LLMs as teachers by assessing the educational questions they generate, applying Anderson and Krathwohl's taxonomy across general, monodisciplinary, and interdisciplinary domains. We shift the focus from LLMs as learners to LLMs as educators, assessing their teaching capability by guiding them to generate questions. We apply four metrics, namely relevance, coverage, representativeness, and consistency, to evaluate the educational quality of LLMs' outputs. Our results indicate that GPT-4 demonstrates significant potential in teaching general, humanities, and science courses, while Claude2 appears better suited as an interdisciplinary teacher. Furthermore, the automatic scores align with human judgments.
Anthology ID: 2024.acl-long.173
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 3138–3167
URL: https://aclanthology.org/2024.acl-long.173
Cite (ACL): Yuyan Chen, Songzhou Yan, Panjun Liu, and Yanghua Xiao. 2024. Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3138–3167, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models (Chen et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.173.pdf