Ming Zhang
Peking
2025
Safe: Enhancing Mathematical Reasoning in Large Language Models via Retrospective Step-aware Formal Verification
Chengwu Liu | Ye Yuan | Yichun Yin | Yan Xu | Xin Xu | Zaoyu Chen | Yasheng Wang | Lifeng Shang | Qun Liu | Ming Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Chain-of-Thought (CoT) prompting has become the de facto method to elicit reasoning capabilities from large language models (LLMs). However, to mitigate hallucinations in CoT that are notoriously difficult to detect, current methods such as process reward models (PRMs) or self-consistency operate as black boxes and do not provide checkable evidence for their judgments, possibly limiting their effectiveness. To address this issue, we draw inspiration from the idea that “the gold standard for supporting a mathematical claim is to provide a proof”. We propose Safe, a retrospective, step-aware formal verification framework. Rather than assigning arbitrary scores, we strive to articulate mathematical claims in the formal mathematical language Lean 4 at each reasoning step and provide formal proofs to identify hallucinations. We evaluate our framework Safe across multiple language models and various mathematical datasets, demonstrating a significant performance improvement while offering interpretable and verifiable evidence. We also propose FormalStep, a benchmark for step-correctness theorem proving with 30,809 formal statements. To the best of our knowledge, our work represents the first endeavor to utilize the formal mathematical language Lean 4 to verify content generated by LLMs, aligning with the reason formal mathematical languages were created in the first place: to provide a robust foundation for hallucination-prone human-written proofs.
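As a toy illustration of the kind of statement such step-aware verification could emit (our invented example, not one of the paper's FormalStep statements), a reasoning step claiming "21 is divisible by 3" can be formalized and discharged in Lean 4:

```lean
-- Hypothetical example: a CoT step claims "21 is divisible by 3".
-- A Safe-style verifier would formalize the claim as a theorem and ask
-- Lean to prove it; a claim Lean cannot prove flags a possible hallucination.
theorem step_claim : 21 % 3 = 0 := by decide
```

The `decide` tactic settles decidable propositions over natural numbers mechanically, which suffices for this arithmetic claim.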
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Jingyang Yuan | Huazuo Gao | Damai Dai | Junyu Luo | Liang Zhao | Zhengyan Zhang | Zhenda Xie | Yuxing Wei | Lean Wang | Zhiping Xiao | Yuqing Wang | Chong Ruan | Ming Zhang | Wenfeng Liang | Wangding Zeng
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trained Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) We achieve substantial speedups through arithmetic intensity-balanced algorithm design, with implementation optimizations for modern hardware. (2) We enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
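The coarse-to-fine idea in the abstract can be sketched in a few lines. This is a minimal single-query toy (our simplification, not DeepSeek's hardware-aligned kernel; function and parameter names are ours): compress key blocks into coarse summaries, select the top-scoring blocks, then run ordinary attention only over tokens in those blocks.

```python
import numpy as np

def nsa_sketch(q, K, V, block=4, top_blocks=2):
    """Toy hierarchical sparse attention: coarse block scoring, then
    fine-grained attention restricted to the selected blocks."""
    n, d = K.shape
    nb = n // block
    # Coarse stage: mean-pool each key block into one summary vector.
    K_coarse = K[: nb * block].reshape(nb, block, d).mean(axis=1)
    keep = np.argsort(K_coarse @ q)[-top_blocks:]      # most relevant blocks
    # Fine stage: gather only the tokens of the selected blocks.
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    scores = (K[idx] @ q) / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]                                  # sparse attention output
```

With `block=4` and `top_blocks=2`, only 8 of 16 keys participate in the softmax, which is where the speedup comes from at long sequence lengths.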
FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation
Junyu Luo | Zhizhuo Kou | Liming Yang | Xiao Luo | Jinsheng Huang | Zhiping Xiao | Jingshu Peng | Chengzhong Liu | Jiaming Ji | Xuanzhe Liu | Sirui Han | Ming Zhang | Yike Guo
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multimodal Large Language Models (MLLMs) have experienced rapid development in recent years. However, in the financial domain, there is a notable lack of effective and specialized multimodal evaluation datasets. To advance the development of MLLMs in the finance domain, we introduce FinMME, encompassing more than 11,000 high-quality financial research samples across 18 financial domains and 6 asset classes, featuring 10 major chart types and 21 subtypes. We ensure data quality through 20 annotators and carefully designed validation mechanisms. Additionally, we develop FinScore, an evaluation system incorporating hallucination penalties and multi-dimensional capability assessment to provide an unbiased evaluation. Extensive experimental results demonstrate that even state-of-the-art models like GPT-4o exhibit unsatisfactory performance on FinMME, highlighting its challenging nature. The benchmark exhibits high robustness with prediction variations under different prompts remaining below 1%, demonstrating superior reliability compared to existing datasets. Our dataset and evaluation protocol are available at https://huggingface.co/datasets/luojunyu/FinMME and https://github.com/luo-junyu/FinMME.
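The hallucination-penalty idea behind FinScore can be illustrated with a deliberately simplified scoring rule (a hypothetical sketch; the paper's actual formula and multi-dimensional capability assessment are richer than this):

```python
def finscore_sketch(results, penalty=0.5):
    """Hypothetical hallucination-penalized score: a correct answer
    earns 1, an abstention earns 0, and a hallucinated (confidently
    wrong) answer subtracts `penalty`; the total is normalized by the
    number of questions and floored at 0."""
    correct = results.count("correct")
    hallucinated = results.count("hallucinated")
    return max(0.0, (correct - penalty * hallucinated) / len(results))
```

Under such a rule, a model that fabricates answers scores worse than one that abstains, which is the behavior a finance benchmark wants to reward.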
A Survey on Efficient Large Language Model Training: From Data-centric Perspectives
Junyu Luo | Bohan Wu | Xiao Luo | Zhiping Xiao | Yiqiao Jin | Rong-Cheng Tu | Nan Yin | Yifan Wang | Jingyang Yuan | Wei Ju | Ming Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Post-training of Large Language Models (LLMs) is crucial for unlocking their task generalization potential and domain-specific capabilities. However, the current LLM post-training paradigm faces significant data challenges, including the high costs of manual annotation and diminishing marginal returns on data scales. Therefore, achieving data-efficient post-training has become a key research question. In this paper, we present the first systematic survey of data-efficient LLM post-training from a data-centric perspective. We propose a taxonomy of data-efficient LLM post-training methods, covering data selection, data quality enhancement, synthetic data generation, data distillation and compression, and self-evolving data ecosystems. We summarize representative approaches in each category and outline future research directions. By examining the challenges in data-efficient LLM post-training, we highlight open problems and propose potential research avenues. We hope our work inspires further exploration into maximizing the potential of data utilization in large-scale model training. Paper List: https://github.com/luo-junyu/Awesome-Data-Efficient-LLM
How Do Large Language Models Perform in Dynamical System Modeling
Xiao Luo | Binqi Chen | Haixin Wang | Zhiping Xiao | Ming Zhang | Yizhou Sun
Findings of the Association for Computational Linguistics: NAACL 2025
This paper studies the problem of dynamical system modeling, which involves the evolution of multiple interacting objects. Recent data-driven methods often utilize graph neural networks (GNNs) to learn these interactions by optimizing the neural network in an end-to-end fashion. While large language models (LLMs) have shown exceptional zero-shot performance across various applications, their potential for modeling dynamical systems has not been extensively explored. In this work, we design prompting techniques for dynamical system modeling and systematically evaluate the capabilities of LLMs on two tasks: dynamic forecasting and relational reasoning. We build an extensive benchmark, LLM4DS, spanning nine datasets for performance comparison. Our extensive experiments yield several key findings: (1) LLMs demonstrate competitive performance without training compared to state-of-the-art methods in dynamical system modeling. (2) LLMs effectively infer complex interactions among objects to capture system evolution. (3) Prompt engineering plays a crucial role in enabling LLMs to accurately understand and predict the evolution of systems.
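A prompting setup like the one described can be sketched as a template builder. This is a hypothetical template of our own devising, not the paper's actual prompts: it serializes observed system states and asks the model to continue the trajectory.

```python
def forecasting_prompt(trajectory, horizon):
    """Hypothetical prompt template for LLM-based dynamic forecasting:
    serialize (time, state) observations and request `horizon` more steps."""
    history = "; ".join(f"t={t}: {state}" for t, state in trajectory)
    return (
        "You are modeling a dynamical system of interacting objects.\n"
        f"Observed states: {history}.\n"
        f"Predict the next {horizon} states, one per line."
    )
```

The third finding above (prompt engineering matters) suggests details like the serialization format and the explicit output contract ("one per line") can meaningfully change accuracy.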
Semi-supervised Fine-tuning for Large Language Models
Junyu Luo | Xiao Luo | Xiusi Chen | Zhiping Xiao | Wei Ju | Ming Zhang
Findings of the Association for Computational Linguistics: NAACL 2025
Supervised fine-tuning (SFT) is crucial in adapting large language models (LLMs) to a specific domain or task. However, only a limited amount of labeled data is available in practical applications, which poses a severe challenge for SFT in yielding satisfactory results. Therefore, a data-efficient framework that can fully exploit labeled and unlabeled data for LLM fine-tuning is highly anticipated. Towards this end, we introduce a **semi-supervised fine-tuning (SemiFT)** task and a framework named **SemiEvol** for LLM alignment in a propagate-and-select manner. For knowledge propagation, SemiEvol adopts a bi-level approach, propagating knowledge from labeled data to unlabeled data through both in-weight and in-context methods. For knowledge selection, SemiEvol incorporates a collaborative learning mechanism, selecting higher-quality pseudo-response samples. We conducted experiments using GPT-4o-mini and Llama-3.1 on seven general or domain-specific datasets, demonstrating significant improvements in model performance on target data. Furthermore, we compared SemiEvol with SFT and self-evolution methods, highlighting its practicality in hybrid data scenarios. Github Repository: [https://github.com/luo-junyu/SemiEvol](https://github.com/luo-junyu/SemiEvol).
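The selection step can be illustrated with a common agreement heuristic. This is our assumed proxy for the collaborative-learning selection, not SemiEvol's actual mechanism: several model configurations answer each unlabeled prompt, and a pseudo-response is kept only when enough of them agree.

```python
from collections import Counter

def select_pseudo_responses(candidates, min_agreement=2):
    """Keep (prompt, answer) pairs where the majority answer among the
    candidate responses reaches `min_agreement` votes; low-agreement
    prompts are dropped as likely low-quality pseudo-labels."""
    selected = []
    for prompt, answers in candidates:
        (best, votes), = Counter(answers).most_common(1)
        if votes >= min_agreement:
            selected.append((prompt, best))
    return selected
```

Agreement across independent samples is a standard, if imperfect, proxy for pseudo-label quality in semi-supervised pipelines.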
Embracing Large Language Models in Traffic Flow Forecasting
Yusheng Zhao | Xiao Luo | Haomin Wen | Zhiping Xiao | Wei Ju | Ming Zhang
Findings of the Association for Computational Linguistics: ACL 2025
Traffic flow forecasting aims to predict future traffic flows based on historical traffic conditions and the road network. It is an important problem in intelligent transportation systems, and a plethora of methods have been proposed. Existing efforts mainly focus on capturing and utilizing spatio-temporal dependencies to predict future traffic flows. Though promising, they fall short in adapting to test-time environmental changes in traffic conditions. To tackle this challenge, we propose to introduce large language models (LLMs) to help traffic flow forecasting and design a novel method named Large Language Model Enhanced Traffic Flow Predictor (LEAF). LEAF adopts two branches, capturing different spatio-temporal relations using graph and hypergraph structures, respectively. The two branches are first pre-trained individually, and during test time, they yield different predictions. Based on these predictions, a large language model is used to select the most likely result. Then, a ranking loss is applied as the learning objective to enhance the prediction ability of the two branches. Extensive experiments on several datasets demonstrate the effectiveness of LEAF. Our code is available at https://github.com/YushengZhao/LEAF.
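The ranking objective described can be sketched as a margin loss (our inference from the abstract; the paper's exact loss may differ): once the LLM picks one branch's prediction, the model is penalized unless the chosen branch outscores the other by at least a margin.

```python
def margin_ranking_loss(chosen_score, other_score, margin=0.1):
    """Hinge-style ranking loss: zero when the LLM-chosen branch beats
    the other branch by at least `margin`, linear penalty otherwise."""
    return max(0.0, margin - (chosen_score - other_score))
```

Driving this loss to zero teaches the two branches to rank their own predictions consistently with the LLM's test-time selections.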
Multifaceted Evaluation of Audio-Visual Capability for MLLMs: Effectiveness, Efficiency, Generalizability and Robustness
Yusheng Zhao | Xiao Luo | Junyu Luo | Weizhi Zhang | Zhiping Xiao | Wei Ju | Philip S. Yu | Ming Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Multi-modal large language models (MLLMs) have recently achieved great success in processing and understanding information from diverse modalities (e.g., text, audio, and visual signals). Despite their growing popularity, there remains a lack of comprehensive evaluation measuring the audio-visual capabilities of these models, especially in diverse scenarios (e.g., distribution shifts and adversarial attacks). In this paper, we present a multifaceted evaluation of the audio-visual capability of MLLMs, focusing on four key dimensions: effectiveness, efficiency, generalizability, and robustness. Through extensive experiments, we find that MLLMs exhibit strong zero-shot and few-shot generalization abilities, enabling them to achieve strong performance with limited data. However, their success relies heavily on the vision modality, which impairs performance when visual input is corrupted or missing. Additionally, while MLLMs are susceptible to adversarial samples, they demonstrate greater robustness compared to traditional models. The experimental results and our observations provide new insights into the audio-visual capabilities of MLLMs, highlighting areas for improvement and offering guidance for future research.
HEAL: Hybrid Enhancement with LLM-based Agents for Text-attributed Hypergraph Self-supervised Representation Learning
Ruochang Li | Xiao Luo | Zhiping Xiao | Wei Ju | Ming Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
This paper studies the problem of text-attributed hypergraph self-supervised representation learning, which aims to generate discriminative representations of hypergraphs without any annotations for downstream tasks. However, real-world hypergraphs can contain incomplete signals, which can degrade the representation learning procedure, especially under label scarcity. Towards this end, we introduce a new perspective that leverages large language models to enhance hypergraph self-supervised learning and propose a novel data-centric approach named Hybrid Hypergraph Enhancement with LLM-based Agents (HEAL). The core of our HEAL is to generate informative nodes and hyperedges through multi-round interaction with LLM-based agents. In particular, we first retrieve similar samples for each node to facilitate the node expansion agent for different views. To generate challenging samples, we measure the gradients for each augmented view and select the most informative one using an evaluation agent. From the structural view, we adopt a topology refinement agent to incorporate new hyperedges for the recovery of missing structural signals. The enhanced hypergraphs are then incorporated into a self-supervised learning framework for discriminative representations. Extensive experiments on several datasets validate the effectiveness of our HEAL in comparison with extensive baselines.
A Survey of RAG-Reasoning Systems in Large Language Models
Yangning Li | Weizhi Zhang | Yuyao Yang | Wei-Chieh Huang | Yaozu Wu | Junyu Luo | Yuanchen Bei | Henry Peng Zou | Xiao Luo | Yusheng Zhao | Chunkit Chan | Yankai Chen | Zhongfen Deng | Yinghui Li | Hai-Tao Zheng | Dongyuan Li | Renhe Jiang | Ming Zhang | Yangqiu Song | Philip S. Yu
Findings of the Association for Computational Linguistics: EMNLP 2025
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-search perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and thought to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally-adaptive, trustworthy, and human-centric.
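The synergized search-and-thought loop the survey spotlights can be sketched abstractly. All interfaces here are our assumptions (the `llm` and `search` callables are stand-ins, not any surveyed system's API): the model alternates between issuing search queries and reasoning over retrieved context until it commits to an answer.

```python
def rag_reasoning_loop(question, llm, search, max_steps=5):
    """Generic interleaved RAG-reasoning loop: at each step the model
    either requests a search (growing the retrieved context) or answers."""
    context = []
    for _ in range(max_steps):
        kind, payload = llm(question, context)   # ("search", query) or ("answer", text)
        if kind == "answer":
            return payload
        context.extend(search(payload))          # retrieved passages feed the next step
    return None                                  # step budget exhausted without an answer
```

Capping the loop with `max_steps` is the usual guard against an agent that keeps searching without converging.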
MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation
Jinsheng Huang | Liang Chen | Taian Guo | Fu Zeng | Yusheng Zhao | Bohan Wu | Ye Yuan | Haozhe Zhao | Zhihui Guo | Yichi Zhang | Jingyang Yuan | Wei Ju | Luchen Liu | Tianyu Liu | Baobao Chang | Ming Zhang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Multimodal Models (LMMs) exhibit impressive cross-modal understanding and reasoning abilities, often assessed through multiple-choice questions (MCQs) that include an image, a question, and several options. However, many benchmarks used for such evaluations suffer from systematic biases. Remarkably, Large Language Models (LLMs) without any visual perception capabilities achieve non-trivial performance, undermining the credibility of these evaluations. To address this issue while maintaining the efficiency of MCQ evaluations, we propose MMEVALPRO, a benchmark designed to avoid Type-I errors through a trilogy evaluation pipeline and more rigorous metrics. For each original question from existing benchmarks, human annotators augment it by creating one perception question and one knowledge anchor question through a meticulous annotation process. MMEVALPRO comprises 2,138 question triplets, totaling 6,414 distinct questions. Two-thirds of these questions are manually labeled by human experts, while the rest are sourced from existing benchmarks (MMMU, ScienceQA, and MathVista). Compared with the existing benchmarks, our experiments with the latest LLMs and LMMs demonstrate that MMEVALPRO is **more challenging** (the best LMM lags behind human performance by 31.73%, compared to an average gap of 8.03% in previous benchmarks) and **more trustworthy** (the best LLM trails the best LMM by 23.09%, whereas the gap for previous benchmarks is just 14.64%). Our in-depth analysis explains the reason for the large performance gap and justifies the trustworthiness of the evaluation, underscoring its significant potential for advancing future research.
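A triplet pipeline like this naturally supports a stricter metric than per-question accuracy. The sketch below is in the spirit of the benchmark's design but is our simplification (the paper's own metric definitions may differ): a triplet counts as solved only when the original, perception, and knowledge-anchor questions are all answered correctly.

```python
def genuine_accuracy(triplet_results):
    """Fraction of triplets fully solved: each element is a
    (original, perception, knowledge_anchor) tuple of booleans, and a
    triplet earns credit only if all three answers are correct."""
    solved = sum(all(triplet) for triplet in triplet_results)
    return solved / len(triplet_results)
```

Such a metric blocks the blind-guessing failure mode described above: a text-only model that gets the original MCQ right but fails the perception question earns nothing.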
Co-authors
- Xiao Luo 8
- Zhiping Xiao 8
- Wei Ju 6
- Junyu Luo 6
- Yusheng Zhao 4
- Jingyang Yuan 3
- Jinsheng Huang 2
- Bohan Wu 2
- Philip S. Yu 2
- Ye Yuan 2
- Weizhi Zhang 2
- Yuanchen Bei 1
- Chunkit Chan 1
- Baobao Chang (常宝宝) 1
- Zaoyu Chen 1
- Binqi Chen 1
- Xiusi Chen 1
- Yankai Chen 1
- Liang Chen 1
- Damai Dai 1
- Zhongfen Deng 1
- Huazuo Gao 1
- Yike Guo 1
- Taian Guo 1
- Zhihui Guo 1
- Sirui Han 1
- Wei-Chieh Huang 1
- Jiaming Ji 1
- Renhe Jiang 1
- Yiqiao Jin 1
- Zhizhuo Kou 1
- Ruochang Li 1
- Yangning Li 1
- Yinghui Li 1
- Dongyuan Li 1
- Wenfeng Liang 1
- Chengwu Liu 1
- Qun Liu 1
- Chengzhong Liu 1
- Xuanzhe Liu 1
- Luchen Liu 1
- Tianyu Liu 1
- Jingshu Peng 1
- Chong Ruan 1
- Lifeng Shang 1
- Yangqiu Song 1
- Yizhou Sun 1
- Rong-Cheng Tu 1
- Yasheng Wang 1
- Lean Wang 1
- Yuqing Wang 1
- Yifan Wang 1
- Haixin Wang 1
- Yuxing Wei 1
- Haomin Wen 1
- Yaozu Wu 1
- Zhenda Xie 1
- Yan Xu 1
- Xin Xu 1
- Liming Yang 1
- Yuyao Yang 1
- Yichun Yin 1
- Nan Yin 1
- Wangding Zeng 1
- Fu Zeng 1
- Zhengyan Zhang 1
- Yichi Zhang 1
- Liang Zhao (赵亮) 1
- Haozhe Zhao 1
- Hai-Tao Zheng 1
- Henry Peng Zou 1