IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning

Zihang Xu, Ziqing Yang, Yiming Cui, Shijin Wang


Abstract
In the field of machine reading comprehension (MRC), existing systems have surpassed average human performance on many tasks, such as SQuAD. However, there is still a long way to go when it comes to logical reasoning. Although some methods for it have been put forward, they are either designed in a rather complicated way or rely too heavily on external structures. In this paper, we propose IDOL (InDicator-Oriented Logic Pre-training), an easy-to-understand but highly effective further pre-training task that logically strengthens pre-trained models with the help of six types of logical indicators and a logically rich dataset, LoGic Pre-training (LGP). IDOL achieves state-of-the-art performance on ReClor and LogiQA, the two most representative benchmarks in logical reasoning MRC, and is shown to generalize to different pre-trained models and to other types of MRC benchmarks, such as RACE and SQuAD 2.0, while retaining competitive general language understanding ability as measured on GLUE tasks. Moreover, at the beginning of the era of large language models, we compare IDOL with several of them, such as ChatGPT, and find that it still shows its advantage.
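The abstract does not spell out how indicator tokens are identified, so the following is only a minimal, hypothetical sketch of what locating logical indicators in a passage before an indicator-oriented pre-training step might look like. The word lists and category names below are illustrative assumptions, not the six-category taxonomy or the procedure defined in the paper.

```python
# Hypothetical sketch: find logical-indicator words in a passage.
# The categories and word lists are illustrative guesses, NOT the
# taxonomy used by IDOL.
import re

INDICATORS = {
    "conclusion": ["therefore", "thus", "hence", "consequently"],
    "premise": ["because", "since", "given that"],
    "adversative": ["however", "but", "nevertheless"],
    "conditional": ["if", "unless"],
    "negation": ["not", "no", "never"],
    "coordination": ["and", "or"],
}

def mark_indicators(text: str):
    """Return (position, token, category) triples for indicator words in `text`."""
    lowered = text.lower()
    hits = []
    for category, words in INDICATORS.items():
        for word in words:
            # Whole-word match, case-insensitive via the lowered copy.
            for m in re.finditer(r"\b" + re.escape(word) + r"\b", lowered):
                hits.append((m.start(), text[m.start():m.end()], category))
    return sorted(hits)

if __name__ == "__main__":
    passage = "If the premise holds, the claim follows; therefore it is not false."
    print(mark_indicators(passage))
```

Spans marked this way could then be used to build indicator-aware pre-training examples, but the exact objective is described in the paper itself rather than here.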
Anthology ID:
2023.findings-acl.513
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8099–8111
URL:
https://aclanthology.org/2023.findings-acl.513
DOI:
10.18653/v1/2023.findings-acl.513
Cite (ACL):
Zihang Xu, Ziqing Yang, Yiming Cui, and Shijin Wang. 2023. IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8099–8111, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning (Xu et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.513.pdf