Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving

Xin Quan, Marco Valentino, Louise Dennis, Andre Freitas


Abstract
Natural language explanations represent a proxy for evaluating explanation-based and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logical errors. To address existing limitations, this paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs). Specifically, we present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences and suggest potential inference strategies for NLI. In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements. We demonstrate how Explanation-Refiner can be jointly used to evaluate explanatory reasoning, autoformalisation, and error correction mechanisms of state-of-the-art LLMs as well as to automatically enhance the quality of explanations of variable complexity in different domains.
Anthology ID:
2024.emnlp-main.172
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2933–2958
URL:
https://aclanthology.org/2024.emnlp-main.172
Cite (ACL):
Xin Quan, Marco Valentino, Louise Dennis, and Andre Freitas. 2024. Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2933–2958, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving (Quan et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.172.pdf
Data:
2024.emnlp-main.172.data.zip