Mingxi Zou


2025

FinHEAR: Human Expertise and Adaptive Risk-Aware Temporal Reasoning for Financial Decision-Making
Jiaxiang Chen | Mingxi Zou | Zhuo Wang | Qifan Wang | Danny Dongning Sun | Zhang Chi | Zenglin Xu
Findings of the Association for Computational Linguistics: EMNLP 2025

Financial decision-making presents unique challenges for language models, requiring them to handle temporally evolving, risk-sensitive, and event-driven contexts. While large language models (LLMs) demonstrate strong general reasoning abilities, they often overlook key behavioral patterns that shape human financial decisions, such as reliance on experts under information asymmetry, loss-averse risk adjustment, and temporal adaptation. We propose FinHEAR, a multi-agent framework for Human Expertise and Adaptive Risk-aware reasoning. FinHEAR coordinates multiple LLM-based agents to capture historical trends, interpret current events, and incorporate expert knowledge within a unified, event-aware pipeline. Grounded in behavioral economics, FinHEAR features expert-guided retrieval to reduce information asymmetry, dynamic position sizing to reflect loss aversion, and feedback-driven refinement to enhance temporal consistency. Experiments on a curated real-world financial dataset show that FinHEAR consistently outperforms strong baselines in both trend forecasting and decision-making.
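
The abstract's behavioral mechanisms can be pictured with a minimal sketch. The Python below is purely illustrative: every name (`MarketContext`, `expert_guided_retrieval`, `loss_averse_position`, the keyword scoring rules) is invented here, it is not the paper's released implementation, and the feedback-driven refinement loop is omitted for brevity.

```python
# Hypothetical sketch of a FinHEAR-style pipeline; all names are invented
# for illustration and do not come from the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class MarketContext:
    history: List[float]      # recent price series
    events: List[str]         # current event headlines
    expert_notes: List[str]   # corpus of expert commentary

def expert_guided_retrieval(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy retrieval: rank expert notes by keyword overlap with the query,
    standing in for reducing information asymmetry via expert knowledge."""
    scored = sorted(corpus, key=lambda doc: -len(set(query.split()) & set(doc.split())))
    return scored[:k]

def trend_agent(history: List[float]) -> float:
    """Toy historical-trend signal in [-1, 1]: sign and size of recent drift."""
    if len(history) < 2:
        return 0.0
    drift = history[-1] - history[0]
    return max(-1.0, min(1.0, drift / (abs(history[0]) + 1e-9) * 10))

def event_agent(events: List[str], expert_notes: List[str]) -> float:
    """Toy event signal: balance of positive vs. negative cue words."""
    text = " ".join(events + expert_notes).lower()
    pos = sum(text.count(w) for w in ("beat", "growth", "upgrade"))
    neg = sum(text.count(w) for w in ("miss", "lawsuit", "downgrade"))
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def loss_averse_position(signal: float, recent_loss: float, lam: float = 2.0) -> float:
    """Dynamic position sizing: shrink exposure after recent losses, with
    losses weighted lam times more than gains (loss aversion)."""
    penalty = 1.0 / (1.0 + lam * max(0.0, recent_loss))
    return signal * penalty

def finhear_step(ctx: MarketContext, recent_loss: float) -> float:
    notes = expert_guided_retrieval("earnings outlook", ctx.expert_notes)
    signal = 0.5 * trend_agent(ctx.history) + 0.5 * event_agent(ctx.events, notes)
    return loss_averse_position(signal, recent_loss)

ctx = MarketContext(
    history=[100.0, 101.5, 103.2],
    events=["Company beats earnings, analysts see growth"],
    expert_notes=["upgrade expected after earnings beat", "sector lawsuit risk"],
)
print(f"position: {finhear_step(ctx, recent_loss=0.05):+.3f}")
```

The `lam` parameter follows the behavioral-economics convention of weighting losses roughly twice as heavily as gains; in the framework itself the agents are LLM-based rather than the keyword heuristics used here.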

From Implicit Exploration to Structured Reasoning: Guideline and Refinement for LLMs
Jiaxiang Chen | Zhuo Wang | Mingxi Zou | Zhucong Li | Zhijian Zhou | Song Wang | Zenglin Xu
Findings of the Association for Computational Linguistics: EMNLP 2025

Large language models (LLMs) have advanced general-purpose reasoning, showing strong performance across diverse tasks. However, existing methods often rely on implicit exploration, where the model follows stochastic, unguided reasoning paths, like walking without a map. This leads to unstable trajectories, little error correction, and limited learning from past experience. To address these issues, we propose a framework that shifts from implicit exploration to structured reasoning through guidelines and refinement. First, we extract structured reasoning patterns from successful trajectories and reflective signals from failures. During inference, the model follows these guidelines step by step, with refinement applied after each step to correct errors and stabilize the reasoning process. Experiments on the Big-Bench Hard (BBH) benchmark show that our method consistently outperforms strong baselines across diverse reasoning tasks. Analysis reveals that stepwise execution, refinement, and experience-based learning improve stability and generalization. We further explore model collaboration during refinement, offering insights into cross-model interactions. Notably, structured reasoning guided by learned instructions matches or even surpasses knowledge distilled through SFT, highlighting its scalability and effectiveness.
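
To make the guideline-and-refinement loop concrete, here is a self-contained sketch. Everything below (`extract_guideline`, `refine`, `solve`, the `toy_llm` stand-in) is a hypothetical rendering of the pipeline the abstract describes, not the authors' code.

```python
# Hypothetical sketch of guideline-following with per-step refinement;
# all names below are invented for illustration, not taken from the paper.
from typing import Callable, List

Step = str
LLM = Callable[[str], str]

def extract_guideline(successes: List[List[Step]]) -> List[Step]:
    """Keep steps shared by every successful trajectory, in order,
    standing in for mining reasoning patterns from past successes."""
    return [s for s in successes[0] if all(s in traj for traj in successes[1:])]

def refine(step: Step, draft: str, llm: LLM) -> str:
    """Refinement pass: ask the model to check and correct its own step output."""
    return llm(f"Check this result of step '{step}' and fix any error:\n{draft}")

def solve(problem: str, guideline: List[Step], llm: LLM) -> str:
    """Follow the guideline step by step, refining after each step,
    instead of letting the model explore implicitly."""
    state = problem
    for step in guideline:
        draft = llm(f"Step: {step}\nContext: {state}")
        state = refine(step, draft, llm)  # stabilize before moving on
    return state

# Toy stand-in for an LLM call so the sketch runs end to end.
def toy_llm(prompt: str) -> str:
    return prompt.splitlines()[-1]  # echo the last context line

guideline = extract_guideline([
    ["restate the question", "list known facts", "derive the answer"],
    ["restate the question", "derive the answer"],
])
print(guideline)  # ['restate the question', 'derive the answer']
print(solve("If x+2=5, what is x?", guideline, toy_llm))
```

The structural point is that `solve` commits to an explicit step list and checks each intermediate result rather than letting the model wander; swapping `toy_llm` for a real model call would leave the control flow unchanged.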