Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards

Marco Valentino, Ian Pratt-Hartmann, André Freitas


Abstract
An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities. While human-annotated explanations are used as ground truth for inference, there is a lack of systematic assessment of their consistency and rigour. In an attempt to provide a critical quality assessment of Explanation Gold Standards (XGSs) for NLI, we propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations. The application of EEV on three mainstream datasets reveals the surprising conclusion that a majority of the explanations, while appearing coherent on the surface, represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors. This conclusion confirms that the inferential properties of explanations are still poorly formalised and understood, and that additional work along this line of research is necessary to improve the way Explanation Gold Standards are constructed.
Anthology ID:
2021.iwcs-1.8
Volume:
Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Month:
June
Year:
2021
Address:
Groningen, The Netherlands (online)
Editors:
Sina Zarrieß, Johan Bos, Rik van Noord, Lasha Abzianidze
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
76–86
URL:
https://aclanthology.org/2021.iwcs-1.8
Cite (ACL):
Marco Valentino, Ian Pratt-Hartmann, and André Freitas. 2021. Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 76–86, Groningen, The Netherlands (online). Association for Computational Linguistics.
Cite (Informal):
Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards (Valentino et al., IWCS 2021)
PDF:
https://aclanthology.org/2021.iwcs-1.8.pdf
Data
QASC, SNLI, Worldtree, e-SNLI