Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning

An-Zi Yen, Wei-Ling Hsu


Abstract
Due to the remarkable language understanding and generation abilities of large language models (LLMs), their use in educational applications has been explored. However, little work has investigated the pedagogical ability of LLMs to help students learn mathematics. In this position paper, we discuss the challenges of employing LLMs to enhance students' mathematical problem-solving skills by providing adaptive feedback. Besides generating incorrect reasoning processes, LLMs can misinterpret the meaning of a question and have difficulty understanding a question's rationale when attempting to correct students' answers. Three research questions are formulated.
Anthology ID: 2023.findings-emnlp.201
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3055–3069
URL: https://aclanthology.org/2023.findings-emnlp.201
DOI: 10.18653/v1/2023.findings-emnlp.201
Cite (ACL): An-Zi Yen and Wei-Ling Hsu. 2023. Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3055–3069, Singapore. Association for Computational Linguistics.
Cite (Informal): Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning (Yen & Hsu, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.201.pdf