Transferring Representations of Logical Connectives

Aaron Traylor, Ellie Pavlick, Roman Feiman


Abstract
In modern natural language processing pipelines, it is common practice to “pretrain” a generative language model on a large corpus of text, and then to “finetune” the resulting representations by continuing to train them on a discriminative textual inference task. However, it is not immediately clear whether language models in this paradigm capture the logical meaning necessary to model entailment. We examine this pretrain-finetune recipe with language models trained on an entailment task over a synthetic propositional language, and present results on test sets probing models’ knowledge of axioms of first-order logic.
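The paper's own data-generation code is not reproduced on this page. As a rough illustration of what a synthetic propositional entailment task of this kind might look like, the sketch below samples random formulas over a small atom inventory and labels each premise-hypothesis pair by truth-table enumeration. Every name and design choice here (the ATOMS inventory, the connective set, the formula depth) is a hypothetical assumption for illustration, not the authors' actual setup.

```python
import itertools
import random

# Hypothetical atom inventory; the paper's synthetic vocabulary may differ.
ATOMS = ["p", "q", "r"]

def random_formula(depth=2):
    """Sample a random propositional formula as a nested tuple."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(ATOMS)
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(formula, valuation):
    """Evaluate a formula under a truth assignment (dict: atom -> bool)."""
    if isinstance(formula, str):
        return valuation[formula]
    if formula[0] == "not":
        return not evaluate(formula[1], valuation)
    left = evaluate(formula[1], valuation)
    right = evaluate(formula[2], valuation)
    return (left and right) if formula[0] == "and" else (left or right)

def entails(premise, hypothesis):
    """Premise entails hypothesis iff hypothesis holds in every model of the premise."""
    for values in itertools.product([True, False], repeat=len(ATOMS)):
        valuation = dict(zip(ATOMS, values))
        if evaluate(premise, valuation) and not evaluate(hypothesis, valuation):
            return False
    return True

def render(formula):
    """Serialize a formula to a flat token string suitable for a language model."""
    if isinstance(formula, str):
        return formula
    if formula[0] == "not":
        return f"( not {render(formula[1])} )"
    return f"( {render(formula[1])} {formula[0]} {render(formula[2])} )"

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        prem, hyp = random_formula(), random_formula()
        print(render(prem), "=>", render(hyp), ":", entails(prem, hyp))
```

Rendered strings of this kind could then serve as input sequences in the pretrain-finetune recipe the abstract describes, with the Boolean entailment label as the discriminative finetuning target.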
Anthology ID: 2021.naloma-1.4
Volume: Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)
Month: June
Year: 2021
Address: Groningen, the Netherlands (online)
Editors: Aikaterini-Lida Kalouli, Lawrence S. Moss
Venue: NALOMA
SIG: SIGSEM
Publisher: Association for Computational Linguistics
Pages: 22–25
URL: https://aclanthology.org/2021.naloma-1.4
Cite (ACL): Aaron Traylor, Ellie Pavlick, and Roman Feiman. 2021. Transferring Representations of Logical Connectives. In Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA), pages 22–25, Groningen, the Netherlands (online). Association for Computational Linguistics.
Cite (Informal): Transferring Representations of Logical Connectives (Traylor et al., NALOMA 2021)
PDF: https://aclanthology.org/2021.naloma-1.4.pdf