Chaoli Zhang


2023

LogiCoT: Logical Chain-of-Thought Instruction Tuning
Hanmeng Liu | Zhiyang Teng | Leyang Cui | Chaoli Zhang | Qiji Zhou | Yue Zhang
Findings of the Association for Computational Linguistics: EMNLP 2023

Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive chain-of-thought reasoning ability. Recent work on self-instruction tuning, such as Alpaca, has focused on enhancing the general proficiency of models. These instructions enable the model to achieve performance comparable to GPT-3.5 on general tasks such as open-domain text generation and paraphrasing. However, they fall short of helping the model handle complex reasoning tasks. To bridge the gap, this paper presents LogiCoT, a new instruction-tuning dataset for Logical Chain-of-Thought reasoning with GPT-4. We elaborate on the process of harvesting instructions for prompting GPT-4 to generate chain-of-thought rationales. LogiCoT serves as an instruction set for teaching models logical reasoning and for eliciting general reasoning skills.