Critical Thinking for Language Models

Gregor Betz, Christian Voigt, Kyle Richardson


Abstract
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments, and generate artificial argumentative texts to train CRiPT: a critical thinking intermediarily pre-trained transformer based on GPT-2. Significant transfer learning effects can be observed: trained on three simple core schemes, CRiPT accurately completes conclusions of different and more complex types of arguments as well. CRiPT generalizes the core argument schemes in a correct way. Moreover, we obtain consistent and promising results on NLU benchmarks. In particular, CRiPT’s zero-shot accuracy on the GLUE diagnostics exceeds GPT-2’s performance by 15 percentage points. The findings suggest that intermediary pre-training on texts that exemplify basic reasoning abilities (such as those typically covered in critical thinking textbooks) might help language models to acquire a broad range of reasoning skills. The synthetic argumentative texts presented in this paper are a promising starting point for building such a “critical thinking curriculum for language models.”
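To illustrate the kind of synthetic data the abstract describes, the sketch below instantiates one deductively valid core scheme (generalized modus ponens) as a short argumentative text. The scheme wording, predicate vocabulary, and names are illustrative assumptions on my part, not the paper's actual templates; the authors' real corpus generator is the debatelab/aacorpus code linked below.

```python
import random

# Hypothetical template for one core scheme (generalized modus ponens).
# The actual schemes and phrasings used by the paper may differ.
SCHEME = (
    "If someone is {F}, then they are {G}. "
    "{name} is {F}. "
    "Therefore, {name} is {G}."
)

# Illustrative vocabulary (assumed, not taken from the paper's corpus).
PREDICATES = ["a cousin of Fred", "a fan of FC Bayern", "an aunt of Mary"]
NAMES = ["Alice", "Bob", "Clara"]

def generate_argument(rng: random.Random) -> str:
    """Fill the scheme with two distinct predicates and a name,
    yielding a deductively valid argument text."""
    f, g = rng.sample(PREDICATES, 2)  # distinct antecedent/consequent
    name = rng.choice(NAMES)
    return SCHEME.format(F=f, G=g, name=name)

if __name__ == "__main__":
    print(generate_argument(random.Random(0)))
```

Texts of this shape, truncated before the conclusion, can serve as prompts whose valid completion a language model is trained or evaluated on.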
Anthology ID:
2021.iwcs-1.7
Volume:
Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Month:
June
Year:
2021
Address:
Groningen, The Netherlands (online)
Editors:
Sina Zarrieß, Johan Bos, Rik van Noord, Lasha Abzianidze
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
63–75
URL:
https://aclanthology.org/2021.iwcs-1.7
Cite (ACL):
Gregor Betz, Christian Voigt, and Kyle Richardson. 2021. Critical Thinking for Language Models. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 63–75, Groningen, The Netherlands (online). Association for Computational Linguistics.
Cite (Informal):
Critical Thinking for Language Models (Betz et al., IWCS 2021)
PDF:
https://aclanthology.org/2021.iwcs-1.7.pdf
Code:
debatelab/aacorpus
Data:
GLUE
LogiQA
SNLI