Tinghui Zhu
2025
MCiteBench: A Multimodal Benchmark for Generating Text with Citations
Caiyu Hu | Yikai Zhang | Tinghui Zhu | Yiwei Ye | Yanghua Xiao
Findings of the Association for Computational Linguistics: EMNLP 2025
Multimodal Large Language Models (MLLMs) have advanced in integrating diverse modalities but frequently suffer from hallucination. A promising solution to mitigate this issue is to generate text with citations, providing a transparent chain for verification. However, existing work primarily focuses on generating citations for text-only content, leaving the challenges of multimodal scenarios largely unexplored. In this paper, we introduce MCiteBench, the first benchmark designed to assess the ability of MLLMs to generate text with citations in multimodal contexts. Our benchmark comprises data derived from academic papers and review-rebuttal interactions, featuring diverse information sources and multimodal content. Experimental results reveal that MLLMs struggle to ground their outputs reliably when handling multimodal input. Further analysis uncovers a systematic modality bias and reveals how models internally rely on different sources when generating citations, offering insights into model behavior and guiding future directions for multimodal citation tasks.
LLM Agents for Education: Advances and Applications
Zhendong Chu | Shen Wang | Jian Xie | Tinghui Zhu | Yibo Yan | Jingheng Ye | Aoxiao Zhong | Xuming Hu | Jing Liang | Philip S. Yu | Qingsong Wen
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Model (LLM) agents are transforming education by automating complex pedagogical tasks and enhancing both teaching and learning processes. In this survey, we present a systematic review of recent advances in applying LLM agents to address key challenges in educational settings, such as feedback comment generation, curriculum design, etc. We analyze the technologies enabling these agents, including representative datasets, benchmarks, and algorithmic frameworks. Additionally, we highlight key challenges in deploying LLM agents in educational settings, including ethical issues, hallucination and overreliance, and integration with existing educational ecosystems. Beyond the core technical focus, we include in Appendix A a comprehensive overview of domain-specific educational agents, covering areas such as science learning, language learning, and professional development.