Aoxiao Zhong
2025
LLM Agents for Education: Advances and Applications
Zhendong Chu | Shen Wang | Jian Xie | Tinghui Zhu | Yibo Yan | Jingheng Ye | Aoxiao Zhong | Xuming Hu | Jing Liang | Philip S. Yu | Qingsong Wen
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Model (LLM) agents are transforming education by automating complex pedagogical tasks and enhancing both teaching and learning processes. In this survey, we present a systematic review of recent advances in applying LLM agents to address key challenges in educational settings, such as feedback comment generation and curriculum design. We analyze the technologies enabling these agents, including representative datasets, benchmarks, and algorithmic frameworks. Additionally, we highlight key challenges in deploying LLM agents in educational settings, including ethical issues, hallucination and overreliance, and integration with existing educational ecosystems. Beyond the core technical focus, we include in Appendix A a comprehensive overview of domain-specific educational agents, covering areas such as science learning, language learning, and professional development.
2023
An Empirical Analysis of Leveraging Knowledge for Low-Resource Task-Oriented Semantic Parsing
Mayank Kulkarni | Aoxiao Zhong | Nicolas Guenon des Mesnards | Sahar Movaghati | Mukund Sridhar | He Xie | Jianhua Lu
Findings of the Association for Computational Linguistics: ACL 2023
Task-oriented semantic parsing has drawn a lot of interest from the NLP community, and especially the voice assistant industry, as it enables representing the meaning of user requests with arbitrarily nested semantics, including multiple intents and compound entities. SOTA models are large seq2seq transformers and require hundreds of thousands of annotated examples to be trained. However, annotating such data to bootstrap new domains or languages is expensive and error-prone, especially for requests made of nested semantics. In addition, large models easily break the tight latency constraints imposed in a user-facing production environment. As part of this work, we explore leveraging external knowledge to improve model accuracy in low-resource and low-compute settings. We demonstrate that using knowledge-enhanced encoders inside seq2seq models does not result in performance gains by itself, but jointly learning to uncover entities in addition to the parse generation is a simple yet effective way of improving performance across the board. We show this is especially true in the low-compute, scarce-data setting and for entity-rich domains, with relative gains up to 74.48% on the TOPv2 dataset.
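The abstract's key recipe is a multi-task objective: the seq2seq parser is trained to generate the nested parse while an auxiliary head learns to tag entities over the encoder states. Below is a minimal PyTorch-style sketch of that idea, assuming a standard encoder-decoder transformer; the class, the entity-tag scheme, and the mixing weight alpha are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical sketch (not the paper's code): joint parse generation + entity tagging.
import torch.nn as nn


class JointParserTagger(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, num_entity_tags, d_model=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        # Head 1: generate the nested semantic parse token by token.
        self.parse_head = nn.Linear(d_model, tgt_vocab)
        # Head 2: auxiliary entity tagger over encoder states (e.g. BIO tags).
        self.entity_head = nn.Linear(d_model, num_entity_tags)

    def forward(self, src_ids, tgt_ids):
        memory = self.transformer.encoder(self.src_embed(src_ids))
        decoded = self.transformer.decoder(self.tgt_embed(tgt_ids), memory)
        return self.parse_head(decoded), self.entity_head(memory)


def joint_loss(parse_logits, parse_labels, tag_logits, tag_labels, alpha=0.5):
    # Weighted sum of the parse-generation loss and the entity-tagging loss;
    # alpha is a hypothetical mixing weight, not a value reported in the paper.
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    parse_loss = ce(parse_logits.flatten(0, 1), parse_labels.flatten())
    tag_loss = ce(tag_logits.flatten(0, 1), tag_labels.flatten())
    return parse_loss + alpha * tag_loss
```

In this setup the entity-tagging head adds almost no inference cost (it can be dropped at serving time), which is consistent with the abstract's emphasis on low-compute, latency-constrained deployments.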