It is not True that Transformers are Inductive Learners: Probing NLI Models with External Negation

Michael Sullivan


Abstract
NLI tasks necessitate a substantial degree of logical reasoning; as such, the remarkable performance of SoTA transformers on these tasks may lead us to believe that those models have learned to reason logically. The results presented in this paper demonstrate that (i) models fine-tuned on NLI datasets learn to treat external negation as a distractor, effectively ignoring its presence in hypothesis sentences; (ii) several near-SoTA encoder and encoder-decoder transformer models fail to inductively learn the law of the excluded middle for a single external negation prefix with respect to NLI tasks, despite extensive fine-tuning; and (iii) those models that are able to learn the law of the excluded middle for a single prefix are unable to generalize this pattern to similar prefixes. Given the critical role of negation in logical reasoning, we may conclude from these findings that transformers do not learn to reason logically when fine-tuned for NLI tasks. Furthermore, these results suggest that transformers may not be able to inductively learn the role of negation with respect to NLI tasks, calling into question their capacity to fully acquire logical reasoning abilities.
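A minimal sketch of the probing setup the abstract describes, assuming the Hugging Face transformers library; the checkpoint name, example sentences, and label handling are illustrative assumptions, not the paper's actual code or evaluated models:

from transformers import pipeline

# Off-the-shelf MNLI classifier (illustrative choice of checkpoint).
nli = pipeline("text-classification", model="roberta-large-mnli")

premise = "A man is playing a guitar on stage."
hypothesis = "a man is performing music."

# Probe the model on the plain hypothesis and on the same hypothesis under
# an external-negation prefix. If the model respected the law of the
# excluded middle, an ENTAILMENT prediction for the plain hypothesis would
# become CONTRADICTION for its externally negated counterpart.
for hyp in [hypothesis.capitalize(), f"It is not true that {hypothesis}"]:
    pred = nli([{"text": premise, "text_pair": hyp}])[0]
    print(f"{hyp!r} -> {pred['label']} ({pred['score']:.3f})")

Finding (i) of the paper predicts that fine-tuned models will assign near-identical labels to both variants, treating the external-negation prefix as a distractor.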
Anthology ID:
2024.eacl-long.116
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1924–1945
URL:
https://aclanthology.org/2024.eacl-long.116
Cite (ACL):
Michael Sullivan. 2024. It is not True that Transformers are Inductive Learners: Probing NLI Models with External Negation. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1924–1945, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
It is not True that Transformers are Inductive Learners: Probing NLI Models with External Negation (Sullivan, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.116.pdf
Video:
https://aclanthology.org/2024.eacl-long.116.mp4