Mingbao Lin


2025

Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models
Zhifei Xie | Mingbao Lin | Zihang Liu | Pengcheng Wu | Shuicheng Yan | Chunyan Miao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Recent advances in multimodal reasoning have largely overlooked the audio modality. We introduce Audio-Reasoner, a large-scale audio language model for deep reasoning. We meticulously curate a large and diverse multi-task audio dataset with simple annotations, then leverage closed-source models to perform secondary labeling, QA generation, and structured chain-of-thought (CoT) annotation. Together, these steps yield CoTA, a high-quality reasoning dataset of 1.2 million reasoning-rich samples. Following inference scaling principles, we train Audio-Reasoner on CoTA, endowing it with strong logical capabilities in audio reasoning. Experiments show state-of-the-art performance across key benchmarks, including MMAU-mini (+25.42%), AIR-Bench chat/foundation (+14.57%/+10.13%), and MELD (+8.01%). Our findings underscore the central role of structured CoT training in advancing audio reasoning. The model, dataset, and code are open-sourced at https://github.com/xzf-thu/Audio-Reasoner and https://huggingface.co/datasets/zhifeixie/Audio-Reasoner-CoTA.
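
To make the three-stage CoTA construction concrete, here is a minimal sketch of a pipeline in that spirit: a seed annotation is enriched via secondary labeling, turned into a QA pair, and paired with a structured CoT trace. All function and field names are hypothetical stand-ins rather than the released code, `query_model` is a placeholder for whatever closed-source API client is used, and the four-part reasoning structure in the prompt is illustrative.

```python
# Hypothetical sketch of a CoTA-style annotation pipeline; not the authors' code.
from dataclasses import dataclass

@dataclass
class AudioSample:
    audio_path: str   # path to the raw audio clip
    annotation: str   # the simple seed annotation curated per task

def query_model(prompt: str) -> str:
    """Placeholder for a closed-source LLM call; plug in an API client here."""
    raise NotImplementedError

def build_cota_record(sample: AudioSample) -> dict:
    # Step 1: secondary labeling -- enrich the simple annotation.
    rich_caption = query_model(
        f"Expand this audio annotation into a detailed caption:\n{sample.annotation}"
    )
    # Step 2: QA generation grounded in the enriched caption.
    qa_pair = query_model(
        f"Write one question and its answer about this audio:\n{rich_caption}"
    )
    # Step 3: structured chain-of-thought linking question to answer
    # (the four-part structure below is illustrative).
    cot = query_model(
        "Produce a structured step-by-step reasoning trace "
        f"(plan, caption analysis, deduction, conclusion) for:\n{qa_pair}"
    )
    return {
        "audio": sample.audio_path,
        "question_answer": qa_pair,
        "reasoning": cot,
    }
```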

2024

LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement
Jiahao Ying | Mingbao Lin | Yixin Cao | Wei Tang | Bo Wang | Qianru Sun | Xuanjing Huang | Shuicheng Yan
Findings of the Association for Computational Linguistics: EMNLP 2024

This paper introduces the “LLMs-as-Instructors” framework, which leverages advanced large language models (LLMs) to autonomously enhance the training of smaller target models. Inspired by the theory of “Learning from Errors,” the framework employs an instructor LLM to meticulously analyze the specific errors of a target model, enabling targeted and efficient training cycles. Within this framework, we implement two strategies: “Learning from Error,” which tailors training data using only the incorrect responses, and “Learning from Error by Contrast,” which uses contrastive learning to analyze both correct and incorrect responses for a deeper understanding of errors. Empirical studies on several open-source models demonstrate significant improvements across multiple benchmarks, including mathematical reasoning, coding ability, and factual knowledge. Notably, the refined Llama-3-8b-Instruct outperforms ChatGPT, illustrating the effectiveness of our approach. By combining the strengths of both strategies, we attain more balanced performance gains on both in-domain and out-of-domain benchmarks.
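
Below is a minimal sketch of one LLMs-as-Instructors training cycle covering the two strategies described above. The callables (`target_model`, `instructor`), the exact-match scoring, and the prompt wording are simplifying assumptions, not the paper's implementation; in practice the instructor's output would be used to fine-tune the target model before the next cycle.

```python
# Hypothetical sketch of one LLMs-as-Instructors cycle; not the released code.
from typing import Callable, List, Tuple

def run_cycle(
    target_model: Callable[[str], str],
    instructor: Callable[[str], str],
    benchmark: List[Tuple[str, str]],  # (question, reference answer) pairs
    contrastive: bool = False,         # False: Learning from Error; True: by Contrast
) -> List[str]:
    # Probe the target model and split its responses into right and wrong.
    wrong, right = [], []
    for question, reference in benchmark:
        prediction = target_model(question)
        bucket = right if prediction.strip() == reference.strip() else wrong
        bucket.append((question, reference, prediction))

    # Ask the instructor to analyze each failure and synthesize targeted data.
    new_training_data = []
    for question, reference, prediction in wrong:
        if contrastive and right:
            # Learning from Error by Contrast: pair a correct case with the
            # failure so the instructor can isolate what went wrong.
            ok_q, _ok_ref, ok_pred = right[0]
            prompt = (
                f"Correct case: {ok_q} -> {ok_pred}\n"
                f"Failure: {question} -> {prediction} (expected {reference})\n"
                "Analyze the error and write a new training example targeting it."
            )
        else:
            # Learning from Error: condition only on the incorrect response.
            prompt = (
                f"Failure: {question} -> {prediction} (expected {reference})\n"
                "Analyze the error and write a new training example targeting it."
            )
        new_training_data.append(instructor(prompt))
    # Fine-tune the target model on this data, then repeat the cycle.
    return new_training_data
```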