Chuan Li


2025

VeriFastScore: Speeding up long-form factuality evaluation
Rishanth Rajendhran | Amir Zadeh | Matthew Sarte | Chuan Li | Mohit Iyyer
Findings of the Association for Computational Linguistics: EMNLP 2025

Metrics like FactScore and VeriScore that evaluate long-form factuality operate by decomposing an input response into atomic claims and then individually verifying each claim. While effective and interpretable, these methods incur numerous LLM calls and can take upwards of 100 seconds to evaluate a single response, limiting their practicality in large-scale evaluation and training scenarios. To address this, we propose VeriFastScore, which leverages synthetic data to fine-tune Llama3.1 8B for simultaneously extracting and verifying all verifiable claims within a given text based on evidence from Google Search. We show that this task cannot be solved via few-shot prompting with closed LLMs due to its complexity: the model receives 4K tokens of evidence on average and needs to concurrently decompose claims, judge their verifiability, and verify them against noisy evidence. However, our fine-tuned VeriFastScore model demonstrates strong correlation with the original VeriScore pipeline at both the example level (r=0.80) and system level (r=0.94) while achieving an overall speedup of 6.6× (9.9× excluding evidence retrieval) over VeriScore. To facilitate future factuality research, we publicly release our VeriFastScore model and synthetic datasets.
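As a rough illustration of the single-pass design described in the abstract, the sketch below scores a response with one model call that both extracts and verifies claims, rather than one call per claim. The function name `call_verifastscore` and its tab-separated output format are assumptions made for illustration, not the released model's actual interface.

```python
# Minimal sketch of a single-pass extract-and-verify scorer in the spirit of
# VeriFastScore: one model call takes the response plus retrieved evidence and
# returns claims with verification labels, instead of verifying each claim
# with a separate LLM call. `call_verifastscore` is a hypothetical stand-in
# for the fine-tuned model.
from typing import List, Tuple


def call_verifastscore(response: str, evidence: str) -> str:
    """Placeholder for the single model call; assumed to return lines of the
    form '<claim>\tSupported' or '<claim>\tUnsupported'."""
    raise NotImplementedError("plug in the fine-tuned model here")


def factuality_score(response: str, evidence: str) -> float:
    """Fraction of extracted verifiable claims judged Supported by the evidence."""
    output = call_verifastscore(response, evidence)
    claims: List[Tuple[str, str]] = []
    for line in output.strip().splitlines():
        claim, _, label = line.partition("\t")
        if claim and label:
            claims.append((claim.strip(), label.strip()))
    if not claims:
        return 0.0  # no verifiable claims extracted
    supported = sum(1 for _, label in claims if label.lower() == "supported")
    return supported / len(claims)
```

The speedup reported in the abstract comes from collapsing the per-claim verification loop into this single pass; the retrieval of Google Search evidence is the remaining cost outside the model call.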

Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision
Tej Deep Pala | Panshul Sharma | Amir Zadeh | Chuan Li | Soujanya Poria
Findings of the Association for Computational Linguistics: EMNLP 2025

Large Language Models (LLMs) are prone to hallucination, especially during multi-hop and reasoning-intensive tasks such as mathematical problem solving. While Outcome Reward Models verify only final answers, Process Reward Models (PRMs) score each intermediate step to steer generation toward coherent solutions. We introduce PathFinder-PRM, a novel hierarchical, error-aware discriminative PRM that first classifies math and consistency errors at each step, then combines these fine-grained signals to estimate step correctness. To train PathFinder-PRM, we construct a 400K-sample dataset by enriching the human-annotated PRM800K corpus and RLHFlow Mistral traces with three-dimensional step-level labels. On PRMBench, PathFinder-PRM achieves a new state-of-the-art PRMScore of 67.7, outperforming the prior best (65.5) while using 3× less data. When applied to reward-guided greedy search, our model yields a prm@8 of 48.3, a +1.5 point gain over the strongest baseline. These results demonstrate that decoupling error detection from reward estimation not only boosts fine-grained error detection but also substantially improves end-to-end, reward-guided mathematical reasoning with greater data efficiency. Our code is available at https://github.com/declare-lab/PathFinder-PRM.
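To make the two-stage scoring and its use in reward-guided greedy search concrete, here is a minimal sketch under stated assumptions: the classifier interface, the way the two error signals are combined (a simple product), and the candidate generator are illustrative placeholders, not the paper's exact implementation.

```python
# Sketch of a hierarchical, error-aware PRM used for reward-guided greedy search.
# Stage 1: classify math and consistency errors for the latest step;
# stage 2: combine those fine-grained signals into a step-correctness reward.
# `classify_errors` and `generate_candidates` are hypothetical placeholders.
from typing import List, Tuple


def classify_errors(problem: str, steps: List[str]) -> Tuple[float, float]:
    """Placeholder: return (p_math_ok, p_consistency_ok) for the latest step."""
    raise NotImplementedError("plug in the PRM's error classifiers here")


def step_reward(problem: str, steps: List[str]) -> float:
    """Combine the error signals into a step-correctness score.
    The simple product used here is an assumption for illustration."""
    p_math_ok, p_consistency_ok = classify_errors(problem, steps)
    return p_math_ok * p_consistency_ok


def generate_candidates(problem: str, steps: List[str], k: int = 8) -> List[str]:
    """Placeholder: sample k candidate next steps from the policy model."""
    raise NotImplementedError("plug in the generator here")


def reward_guided_greedy_search(problem: str, max_steps: int = 20) -> List[str]:
    """At each step, keep the candidate the PRM scores highest (greedy search)."""
    steps: List[str] = []
    for _ in range(max_steps):
        candidates = generate_candidates(problem, steps)
        best = max(candidates, key=lambda c: step_reward(problem, steps + [c]))
        steps.append(best)
        if best.strip().lower().startswith("final answer"):
            break
    return steps
```

The design choice highlighted by the abstract is the decoupling: the reward used for search is derived from explicit error-type predictions rather than a single opaque step score, which is what enables the fine-grained error analysis reported on PRMBench.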