Wenfei Zhou


2024

FOLIO: Natural Language Reasoning with First-Order Logic
Simeng Han | Hailey Schoelkopf | Yilun Zhao | Zhenting Qi | Martin Riddell | Wenfei Zhou | James Coady | David Peng | Yujie Qiao | Luke Benson | Lucy Sun | Alexander Wardle-Solano | Hannah Szabó | Ekaterina Zubova | Matthew Burtell | Jonathan Fan | Yixin Liu | Brian Wong | Malcolm Sailor | Ansong Ni | Linyong Nan | Jungo Kasai | Tao Yu | Rui Zhang | Alexander Fabbri | Wojciech Maciej Kryscinski | Semih Yavuz | Ye Liu | Xi Victoria Lin | Shafiq Joty | Yingbo Zhou | Caiming Xiong | Rex Ying | Arman Cohan | Dragomir Radev
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs) have achieved remarkable performance on a variety of natural language understanding tasks. However, existing benchmarks are inadequate for measuring the complex logical reasoning capabilities of a model. We present FOLIO, a human-annotated, logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations. FOLIO consists of 1,430 examples (unique conclusions), each paired with one of 487 sets of premises used to deductively reason about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their FOL annotations, which are automatically verified by an FOL inference engine. In addition to the main NL reasoning task, NL-FOL pairs in FOLIO constitute a new NL-FOL translation dataset. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models. For both NL reasoning and NL-FOL translation, we benchmark multiple state-of-the-art language models. Our results show that a subset of FOLIO remains a challenge for one of the most capable publicly available LLMs, GPT-4.
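To illustrate the kind of check that FOL annotations make possible, the sketch below encodes a toy premise set and candidate conclusion and tests entailment by exhaustive model checking. This is a simplified propositional stand-in, not FOLIO's actual first-order pipeline or inference engine, and the example premises are invented for illustration.

```python
# Minimal sketch (not FOLIO's actual pipeline): check whether a conclusion
# follows from a set of premises by brute-force model checking. FOLIO uses
# full first-order logic with an FOL inference engine; here we use a
# propositional stand-in with hand-written lambdas over truth assignments.
from itertools import product

ATOMS = ["rainy", "wet_ground", "slippery"]

# Toy premises (invented for illustration):
#   P1: rainy -> wet_ground
#   P2: wet_ground -> slippery
premises = [
    lambda v: (not v["rainy"]) or v["wet_ground"],
    lambda v: (not v["wet_ground"]) or v["slippery"],
]
# Candidate conclusion: rainy -> slippery
conclusion = lambda v: (not v["rainy"]) or v["slippery"]

def entails(premises, conclusion):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

print(entails(premises, conclusion))  # True -> the conclusion is valid given the premises
```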

On Evaluating the Integration of Reasoning and Action in LLM Agents with Database Question Answering
Linyong Nan | Ellen Zhang | Weijin Zou | Yilun Zhao | Wenfei Zhou | Arman Cohan
Findings of the Association for Computational Linguistics: NAACL 2024

This study introduces a new long-form database question answering dataset designed to evaluate how Large Language Models (LLMs) interact with a SQL interpreter. The task requires LLMs to strategically generate multiple SQL queries to retrieve sufficient data from a database, to reason with the acquired context, and to synthesize the results into a comprehensive analytical narrative. Our findings highlight that this task poses great challenges even for the state-of-the-art GPT-4 model. We propose and evaluate two interaction strategies, and provide a fine-grained analysis of the individual stages within the interaction. A key discovery is the identification of two primary bottlenecks hindering effective interaction: the capacity for planning and the ability to generate multiple SQL queries. To address the challenge of accurately assessing answer quality, we introduce a multi-agent evaluation framework that simulates the academic peer-review process, enhancing the precision and reliability of our evaluations. This framework allows for a more nuanced understanding of the strengths and limitations of current LLMs in complex retrieval and reasoning tasks.
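A minimal sketch of the kind of reason-and-act loop the task evaluates: the model alternates between proposing SQL queries and reading interpreter results until it decides to write its narrative answer. The `propose_action` stub stands in for an actual LLM call, and the schema and queries are invented; this is an assumed illustration, not the paper's implementation.

```python
# Minimal sketch of an LLM <-> SQL-interpreter interaction loop.
# Assumptions: `propose_action` is a stub standing in for a real LLM call;
# the schema and queries are invented for illustration, not from the dataset.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("EU", 2022, 1.2), ("EU", 2023, 1.5), ("US", 2023, 2.1)])

def propose_action(question, history):
    """Stub for the LLM: decide whether to issue another SQL query or to answer."""
    scripted = [("sql", "SELECT region, SUM(revenue) FROM sales GROUP BY region"),
                ("sql", "SELECT year, SUM(revenue) FROM sales GROUP BY year"),
                ("answer", "Revenue is concentrated in the US and grew from 2022 to 2023.")]
    return scripted[len(history)]

def run(question, max_turns=5):
    history = []
    for _ in range(max_turns):
        kind, payload = propose_action(question, history)
        if kind == "answer":
            return payload, history
        rows = conn.execute(payload).fetchall()   # act: execute the proposed SQL
        history.append((payload, rows))           # observe: feed results back as context
    return "No answer produced within the turn budget.", history

answer, trace = run("Summarize how revenue differs across regions and years.")
print(answer)
```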

P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains
Simeng Han | Aaron Yu | Rui Shen | Zhenting Qi | Martin Riddell | Wenfei Zhou | Yujie Qiao | Yilun Zhao | Semih Yavuz | Ye Liu | Shafiq Joty | Yingbo Zhou | Caiming Xiong | Dragomir Radev | Rex Ying | Arman Cohan
Findings of the Association for Computational Linguistics: EMNLP 2024

Existing methods for understanding the capabilities of LLMs in logical reasoning rely on binary entailment classification or synthetically derived rationales, which are not sufficient for properly assessing a model's capabilities. We present P-FOLIO, a human-annotated dataset consisting of diverse and complex reasoning chains for a set of realistic logical reasoning stories also written by humans. P-FOLIO is collected with an annotation protocol that enables humans to annotate well-structured natural language proofs for first-order logic reasoning problems in a step-by-step manner. The number of reasoning steps in P-FOLIO ranges from 0 to 20. We further use P-FOLIO to evaluate and improve large-language-model (LLM) reasoning capabilities. We evaluate LLM reasoning capabilities at a fine granularity via single-step inference rule classification, covering more diverse inference rules at higher levels of complexity than previous works. Given that a single model-generated reasoning chain could take a completely different path from the human-annotated one, we sample multiple reasoning chains from a model and use pass@k metrics for evaluating the quality of model-generated reasoning chains. We show that human-written reasoning chains significantly boost the logical reasoning capabilities of LLMs via many-shot prompting and fine-tuning. Furthermore, fine-tuning Llama3-7B on P-FOLIO improves the model performance by 10% or more on three other out-of-domain logical reasoning datasets.
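The abstract does not spell out how pass@k is computed; the sketch below uses the standard unbiased estimator from the code-generation evaluation literature over n sampled reasoning chains of which c are judged correct, which is one plausible instantiation rather than the paper's confirmed procedure.

```python
# Standard unbiased pass@k estimator over n sampled reasoning chains, c of which
# are correct (one plausible instantiation; the paper may differ in details).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k chains drawn from n samples is correct."""
    if n - c < k:
        return 1.0  # fewer than k incorrect chains exist, so any draw of k hits a correct one
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=3, k=5))  # ~0.92 when 3 of 10 sampled chains are correct
```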