Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models

Borui Wang, Qiuyuan Huang, Budhaditya Deb, Aaron Halfaker, Liqun Shao, Daniel McDuff, Ahmed Hassan Awadallah, Dragomir Radev, Jianfeng Gao


Abstract
Natural language contains rich logical structures and logical information, and correctly detecting and accurately understanding these structures is crucial for NLP models' performance on many important NLU and NLG tasks. Existing pre-trained language models based on the transformer architecture mostly adopt a classical design for constructing their input embeddings that ignores the logical structures underlying natural language text, limiting their ability to capture and encode key logical information in the input sequences. To overcome these limitations, in this paper we first propose a novel approach to construct logic-aware input embeddings for transformer language models through a combination of logic detection, logic mapping, and hierarchical logical projections, and then develop a corresponding new modeling paradigm that upgrades existing transformer language models into logical transformers to boost their performance on different NLU and NLG tasks. Our empirical experiments on four important and challenging NLU and NLG tasks demonstrate that the proposed logical transformer language models achieve superior performance over their baseline transformer models through a deeper understanding of the logical structures of texts.
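
As a rough illustration of the general idea described in the abstract, the sketch below shows one plausible way (not the authors' actual method) of making a transformer's input embeddings "logic-aware": adding an embedding table keyed by per-token logical-role ids, produced by an upstream logic detector, on top of the standard token embeddings. All names, signatures, and the role inventory here (LogicAwareEmbedding, logic_ids, num_logic_roles) are illustrative assumptions; the paper's actual logic detection, logic mapping, and hierarchical logical projection components are not reproduced.

# Hedged sketch only: fuse token embeddings with hypothetical logical-role
# embeddings before feeding a BERT-style transformer. Names are assumptions.
import torch
import torch.nn as nn

class LogicAwareEmbedding(nn.Module):
    def __init__(self, vocab_size: int, num_logic_roles: int, hidden_size: int):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, hidden_size)
        # Extra table indexed by per-token logical-role ids from a logic detector.
        self.logic_embed = nn.Embedding(num_logic_roles, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, token_ids: torch.Tensor, logic_ids: torch.Tensor) -> torch.Tensor:
        # Sum token and logical-role embeddings, then normalize, mirroring how
        # positional/segment embeddings are typically fused in BERT-style models.
        return self.norm(self.token_embed(token_ids) + self.logic_embed(logic_ids))

# Toy usage: a batch of 2 sequences, 5 tokens each, with hypothetical role ids.
embed = LogicAwareEmbedding(vocab_size=30522, num_logic_roles=8, hidden_size=768)
token_ids = torch.randint(0, 30522, (2, 5))
logic_ids = torch.randint(0, 8, (2, 5))
print(embed(token_ids, logic_ids).shape)  # torch.Size([2, 5, 768])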
Anthology ID: 2023.findings-acl.111
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1762–1773
URL: https://aclanthology.org/2023.findings-acl.111
DOI: 10.18653/v1/2023.findings-acl.111
Cite (ACL): Borui Wang, Qiuyuan Huang, Budhaditya Deb, Aaron Halfaker, Liqun Shao, Daniel McDuff, Ahmed Hassan Awadallah, Dragomir Radev, and Jianfeng Gao. 2023. Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1762–1773, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models (Wang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.111.pdf