Developmental Negation Processing in Transformer Language Models

Antonio Laverghetta Jr., John Licato


Abstract
Reasoning using negation is known to be difficult for transformer-based language models. While previous studies have used the tools of psycholinguistics to probe a transformer’s ability to reason over negation, none have focused on the types of negation studied in developmental psychology. We explore how well transformers can process such categories of negation by framing the problem as a natural language inference (NLI) task. We curate a set of diagnostic questions for our target categories from popular NLI datasets and evaluate how well a suite of models reasons over them. We find that models perform consistently better only on certain categories, suggesting clear distinctions in how they are processed.
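As a rough illustration of the NLI framing described in the abstract (not the paper's actual pipeline), the sketch below scores a negation-bearing premise/hypothesis pair with an off-the-shelf MNLI-fine-tuned model. The model name and example sentences are assumptions chosen for illustration; the paper's own diagnostic items and evaluation code are in the linked repository below.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed off-the-shelf NLI model; the paper evaluates its own suite of models.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative premise/hypothesis pair involving negation (not from the paper's diagnostics).
premise = "The child ate the cookie."
hypothesis = "The child did not eat the cookie."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Label order for roberta-large-mnli: contradiction, neutral, entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.3f}")
```

A model that handles this kind of negation well should assign most of the probability mass to the contradiction label for the pair above.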
Anthology ID:
2022.acl-short.60
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
545–551
URL:
https://aclanthology.org/2022.acl-short.60
DOI:
10.18653/v1/2022.acl-short.60
Cite (ACL):
Antonio Laverghetta Jr. and John Licato. 2022. Developmental Negation Processing in Transformer Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 545–551, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Developmental Negation Processing in Transformer Language Models (Laverghetta Jr. & Licato, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.60.pdf
Software:
2022.acl-short.60.software.zip
Code:
advancing-machine-human-reasoning-lab/negation-processing-acl-2022
Data:
MultiNLI