Taicheng Guo


2025

Data Interpreter: An LLM Agent for Data Science
Sirui Hong | Yizhang Lin | Bang Liu | Bangbang Liu | Binhao Wu | Ceyao Zhang | Danyang Li | Jiaqi Chen | Jiayi Zhang | Jinlin Wang | Li Zhang | Lingyao Zhang | Min Yang | Mingchen Zhuge | Taicheng Guo | Tuo Zhou | Wei Tao | Robert Tang | Xiangtao Lu | Xiawu Zheng | Xinbing Liang | Yaying Fei | Yuheng Cheng | Yongxin Ni | Zhibin Gou | Zongze Xu | Yuyu Luo | Chenglin Wu
Findings of the Association for Computational Linguistics: ACL 2025

Large Language Model (LLM)-based agents have excelled in various domains but face significant challenges when applied to data science workflows due to their complex, multi-stage nature. Current LLM-based agents struggle with non-linear relationships, recursive dependencies, implicit data- and logic-dependent reasoning, and managing extensive context. In this paper, we introduce Data Interpreter, an LLM-based agent that addresses these challenges through hierarchical graph-based modeling to represent the complexity and a progressive strategy for step-by-step verification, refinement, and consistent context management. Extensive experiments confirm the effectiveness of Data Interpreter. On InfiAgent-DABench, it boosts performance by 25% (from 75.9% to 94.9%), and on machine learning and open-ended tasks, it lifts accuracy from 88% to 95% and from 60% to 97%, respectively. Moreover, our method surpasses state-of-the-art baselines by 26% on the MATH dataset. We will release the code upon publication.
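The abstract describes a hierarchical graph-based model of the workflow combined with progressive, step-by-step verification. The paper's actual implementation is not reproduced here; the following is a minimal illustrative sketch of the general idea, with invented task names and stand-in checks, where a real agent would prompt an LLM to generate and execute code for each node.

```python
from graphlib import TopologicalSorter  # stdlib DAG helper, Python 3.9+

# Illustrative task graph for a toy data-science workflow. Each node lists the
# steps it depends on and a verification predicate applied to its output.
TASKS = {
    "load_data":   {"deps": [], "check": lambda out: out is not None},
    "clean_data":  {"deps": ["load_data"], "check": lambda out: len(out) > 0},
    "train_model": {"deps": ["clean_data"], "check": lambda out: out.get("score", 0) > 0.5},
    "report":      {"deps": ["train_model", "clean_data"], "check": lambda out: bool(out)},
}

def run_step(name, context):
    # Stand-in executor: here we just return canned outputs; an agent would
    # generate and run code for the step, given its dependencies' outputs.
    dummy = {
        "load_data": [1, 2, 3],
        "clean_data": [1, 2],
        "train_model": {"score": 0.9},
        "report": {"summary": "ok"},
    }
    return dummy[name]

def execute(tasks, max_retries=2):
    order = TopologicalSorter({k: v["deps"] for k, v in tasks.items()}).static_order()
    context = {}
    for name in order:
        for attempt in range(max_retries + 1):
            out = run_step(name, context)
            if tasks[name]["check"](out):  # step-by-step verification
                context[name] = out
                break
        else:
            raise RuntimeError(f"step {name!r} failed verification; would trigger graph refinement")
    return context

print(execute(TASKS))
```

Executing dependencies in topological order while verifying each node before its dependents run is one way to keep context consistent across a multi-stage workflow; the paper's refinement strategy operates on the graph itself rather than simply retrying.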

Dissecting Logical Reasoning in LLMs: A Fine-Grained Evaluation and Supervision Study
Yujun Zhou | Jiayi Ye | Zipeng Ling | Yufei Han | Yue Huang | Haomin Zhuang | Zhenwen Liang | Kehan Guo | Taicheng Guo | Xiangqi Wang | Xiangliang Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025

Logical reasoning is a core capability for large language models (LLMs), yet existing benchmarks that rely solely on final-answer accuracy fail to capture the quality of the reasoning process. To address this, we introduce FineLogic, a fine-grained evaluation framework that assesses logical reasoning across three dimensions: overall accuracy, stepwise soundness, and representation-level probing. Leveraging this framework, we conduct a comprehensive study on how different supervision formats in fine-tuning shape reasoning abilities. We fine-tune LLMs on four supervision styles—one in natural language and three symbolic variants—and find a key trade-off: natural language supervision excels at generalization to out-of-distribution and long-chain problems, whereas symbolic supervision is superior at instilling structurally sound, atomic reasoning steps. Furthermore, our probing analysis indicates that fine-tuning primarily refines the model’s step-by-step generation process, rather than improving its ability to converge on an answer early. Together, our framework and analysis provide a more rigorous lens for evaluating and improving logical reasoning in LLMs. The code is available at https://github.com/YujunZhou/FineLogic.
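Two of the three evaluation dimensions named in the abstract, overall accuracy and stepwise soundness, can be illustrated with a small sketch. The data layout and metric names below are assumptions for illustration only; the exact FineLogic metrics and per-step verification procedure are defined in the paper and its repository.

```python
# Toy reasoning traces: each has a gold answer, a predicted answer, and a list
# of per-step soundness labels (in practice produced by a verifier or annotator).
traces = [
    {"gold": "yes", "pred": "yes", "steps_sound": [True, True, True]},
    {"gold": "no",  "pred": "no",  "steps_sound": [True, False, True]},
    {"gold": "yes", "pred": "no",  "steps_sound": [True, True]},
]

def overall_accuracy(traces):
    # Fraction of problems whose final answer matches the gold answer.
    return sum(t["pred"] == t["gold"] for t in traces) / len(traces)

def stepwise_soundness(traces):
    # Fraction of individual reasoning steps judged sound, pooled over traces.
    steps = [s for t in traces for s in t["steps_sound"]]
    return sum(steps) / len(steps)

print(f"overall accuracy:   {overall_accuracy(traces):.2f}")   # 0.67
print(f"stepwise soundness: {stepwise_soundness(traces):.2f}")  # 0.88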

2024

SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark
Zhenwen Liang | Kehan Guo | Gang Liu | Taicheng Guo | Yujun Zhou | Tianyu Yang | Jiajun Jiao | Renjie Pi | Jipeng Zhang | Xiangliang Zhang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level. It addresses a critical educational phase often overlooked in existing benchmarks, spanning high school to pre-college levels. SceMQA focuses on core science subjects including Mathematics, Physics, Chemistry, and Biology. It features a blend of multiple-choice and free-response formats, ensuring a comprehensive evaluation of AI models' abilities. Additionally, our benchmark provides specific knowledge points for each problem and detailed explanations for each answer. SceMQA also uniquely presents problems with identical contexts but varied questions to facilitate a more thorough and accurate assessment of reasoning capabilities. In the experiment, we evaluate both open-source and closed-source state-of-the-art Multimodal Large Language Models (MLLMs) across various experimental settings. The results indicate that further research and development are needed to build more capable MLLMs, as even the strongest models achieve only 50% to 60% accuracy.
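The abstract outlines the benchmark's structure: a shared (image-based) context with several questions in mixed formats, each carrying a knowledge point and an explanation. A minimal sketch of what one such item might look like as a data structure follows; all field names and content are invented for illustration and do not reflect the released dataset schema.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    answer: str
    format: str                                  # "multiple_choice" or "free_response"
    choices: list[str] = field(default_factory=list)
    knowledge_point: str = ""
    explanation: str = ""

@dataclass
class Item:
    subject: str                                 # Mathematics, Physics, Chemistry, or Biology
    context_image: str                           # path or URL to the shared figure
    questions: list[Question] = field(default_factory=list)

# Hypothetical item: two questions sharing one context, in different formats.
item = Item(
    subject="Physics",
    context_image="figures/incline.png",
    questions=[
        Question(
            prompt="What is the block's acceleration?",
            answer="B",
            format="multiple_choice",
            choices=["2 m/s^2", "4 m/s^2", "6 m/s^2", "8 m/s^2"],
            knowledge_point="Newton's second law",
            explanation="Resolve forces along the incline and divide by mass.",
        ),
        Question(
            prompt="Derive the block's speed after 3 s from rest.",
            answer="12 m/s",
            format="free_response",
            knowledge_point="Kinematics",
            explanation="v = a * t with a = 4 m/s^2.",
        ),
    ],
)
print(len(item.questions), "questions sharing one context")
```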