Dongwei Jiang
2024
Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic
Nathaniel Weir | Kate Sanders | Orion Weller | Shreya Sharma | Dongwei Jiang | Zhengping Jiang | Bhavana Dalvi Mishra | Oyvind Tafjord | Peter Jansen | Peter Clark | Benjamin Van Durme
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recent language models enable new opportunities for structured reasoning with text, such as the construction of intuitive, proof-like textual entailment trees without relying on brittle formal logic. However, progress in this direction has been hampered by a long-standing lack of a clear protocol for determining what _valid decompositional entailment_ is. This absence causes noisy datasets and limited performance gains by modern neuro-symbolic entailment engines. To address these problems, we formulate a consistent and theoretically grounded approach to annotating decompositional entailment and evaluate its impact on LLM-based textual inference. We find that our new dataset, RDTE (Recognizing Decompositional Textual Entailment), has a substantially higher internal consistency than prior decompositional entailment datasets, suggesting that RDTE is a significant step forward in the long-standing problem of forming a clear protocol for discerning entailment. We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality, illustrating the practical benefit of this advance for textual inference.
LeanReasoner: Boosting Complex Logical Reasoning with Lean
Dongwei Jiang | Marcio Fonseca | Shay Cohen
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large language models (LLMs) often struggle with complex logical reasoning due to logical inconsistencies and the inherent difficulty of such reasoning. We use Lean, a theorem-proving framework, to address these challenges. By formalizing logical reasoning problems into theorems within Lean, we can solve them by proving or disproving the corresponding theorems. This method reduces the risk of logical inconsistencies with the help of Lean’s symbolic solver. It also enhances our ability to treat complex reasoning tasks using Lean’s extensive library of theorem proofs. Our method achieves state-of-the-art performance on the FOLIO dataset and achieves performance near this level on ProofWriter. Notably, these results were accomplished by fine-tuning on fewer than 100 in-domain samples for each dataset.
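To illustrate the kind of formalization the abstract describes, here is a minimal, hypothetical sketch in Lean 4: a natural-language syllogism ("All men are mortal; Socrates is a man; therefore Socrates is mortal") rendered as premises and a goal. The names `Person`, `Man`, `Mortal`, and `socrates` are illustrative choices, not from the paper.

```lean
-- Premises become hypotheses; the conclusion becomes the goal to prove.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

theorem socrates_mortal
    (h1 : ∀ x, Man x → Mortal x)  -- premise: all men are mortal
    (h2 : Man socrates)           -- premise: Socrates is a man
    : Mortal socrates :=          -- conclusion to prove
  h1 socrates h2                  -- apply the universal premise to Socrates
```

A disprovable claim would instead be formalized with a negated goal, so that finding a proof of either the statement or its negation settles the inference.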