LogicNMR: Probing the Non-monotonic Reasoning Ability of Pre-trained Language Models

Yeliang Xiu, Zhanhao Xiao, Yongmei Liu


Abstract
The logical reasoning capabilities of pre-trained language models have recently received much attention. As one of the vital reasoning paradigms, non-monotonic reasoning refers to reasoning in which conclusions may be invalidated by new information. Existing work has constructed a non-monotonic inference dataset 𝛿-NLI and explored the performance of language models on it. However, the 𝛿-NLI dataset is entangled with commonsense reasoning. In this paper, we explore the pure non-monotonic reasoning ability of pre-trained language models. We build a non-monotonic reasoning benchmark, named LogicNMR, with explicit default rules and iterative updates. In our experiments, we evaluate popular language models on LogicNMR in terms of accuracy, generalization, proof-based traceability and robustness. The results show that although the fine-tuned language models achieve an accuracy of more than 94.4% on LogicNMR, their performance drops significantly on generalization and proof-based traceability.
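To make the notions of default rules and belief revision by new information concrete, here is a minimal, hypothetical Python sketch of default reasoning. It is not the LogicNMR dataset format or the authors' implementation, just an illustration of how a conclusion drawn by a default rule is retracted once contradicting information is added.

```python
# Hypothetical toy illustration of non-monotonic (default) reasoning.
# A default rule (premise, exception, conclusion) fires when the premise is
# believed and the exception is NOT believed (negation as failure).
# This naive fixpoint ignores interacting defaults; it is only a sketch.

def close_under_defaults(facts, rules):
    """Return the belief set obtained by repeatedly firing default rules."""
    beliefs = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, exception, conclusion in rules:
            if (premise in beliefs
                    and exception not in beliefs
                    and conclusion not in beliefs):
                beliefs.add(conclusion)
                changed = True
    return beliefs

# "Birds normally fly, unless they are penguins."
rules = [("bird", "penguin", "flies")]

print("flies" in close_under_defaults({"bird"}, rules))             # True
print("flies" in close_under_defaults({"bird", "penguin"}, rules))  # False
```

Adding the fact "penguin" invalidates the previously derivable conclusion "flies", which is exactly the non-monotonic behavior the benchmark probes.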
Anthology ID:
2022.findings-emnlp.265
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3616–3626
URL:
https://aclanthology.org/2022.findings-emnlp.265
DOI:
10.18653/v1/2022.findings-emnlp.265
Cite (ACL):
Yeliang Xiu, Zhanhao Xiao, and Yongmei Liu. 2022. LogicNMR: Probing the Non-monotonic Reasoning Ability of Pre-trained Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3616–3626, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
LogicNMR: Probing the Non-monotonic Reasoning Ability of Pre-trained Language Models (Xiu et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.265.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.265.mp4