Can Pretrained Language Models (Yet) Reason Deductively?

Zhangdie Yuan, Songbo Hu, Ivan Vulić, Anna Korhonen, Zaiqiao Meng


Abstract
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, with PLMs showing promising performance on many knowledge-intensive tasks. This strong performance has led the community to believe that the models possess a modicum of reasoning competence rather than merely memorising the knowledge. In this paper, we conduct a comprehensive evaluation of the learnable deductive (also known as explicit) reasoning capability of PLMs. Through a series of controlled experiments, we report two main findings: 1) PLMs inadequately generalise learned logic rules and perform inconsistently under simple adversarial surface-form edits; 2) while fine-tuning PLMs for deductive reasoning does improve their performance on reasoning over unseen knowledge facts, it causes catastrophic forgetting of previously learnt knowledge. Our results suggest that PLMs cannot yet perform reliable deductive reasoning, demonstrating the importance of controlled examination and probing of PLMs’ deductive reasoning abilities. We reach beyond (misleading) task performance, revealing that PLMs are still far from achieving robust reasoning capabilities, even for simple deductive tasks.
Anthology ID:
2023.eacl-main.106
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1447–1462
URL:
https://aclanthology.org/2023.eacl-main.106
DOI:
10.18653/v1/2023.eacl-main.106
Cite (ACL):
Zhangdie Yuan, Songbo Hu, Ivan Vulić, Anna Korhonen, and Zaiqiao Meng. 2023. Can Pretrained Language Models (Yet) Reason Deductively? In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1447–1462, Dubrovnik, Croatia. Association for Computational Linguistics.
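BibTeX:
For reference managers, the citation above can also be expressed in BibTeX. The entry below is reconstructed from the metadata on this page; the entry key is an assumption following the Anthology's usual author-year-keyword pattern, since no Bibkey is listed above.
% Note: the entry key "yuan-etal-2023-pretrained" is assumed; all other fields are taken from this page.
@inproceedings{yuan-etal-2023-pretrained,
    title = "Can Pretrained Language Models (Yet) Reason Deductively?",
    author = "Yuan, Zhangdie and Hu, Songbo and Vuli{\'c}, Ivan and Korhonen, Anna and Meng, Zaiqiao",
    editor = "Vlachos, Andreas and Augenstein, Isabelle",
    booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.eacl-main.106",
    doi = "10.18653/v1/2023.eacl-main.106",
    pages = "1447--1462",
}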
Cite (Informal):
Can Pretrained Language Models (Yet) Reason Deductively? (Yuan et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.106.pdf
Video:
https://aclanthology.org/2023.eacl-main.106.mp4