%0 Conference Proceedings %T Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations %A Jung, Jaehun %A Qin, Lianhui %A Welleck, Sean %A Brahman, Faeze %A Bhagavatula, Chandra %A Le Bras, Ronan %A Choi, Yejin %Y Goldberg, Yoav %Y Kozareva, Zornitsa %Y Zhang, Yue %S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing %D 2022 %8 December %I Association for Computational Linguistics %C Abu Dhabi, United Arab Emirates %F jung-etal-2022-maieutic %X Pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. However, these approaches are fundamentally bounded by the correctness of explanations, which themselves are often noisy and inconsistent. In this work, we develop Maieutic Prompting, which aims to infer a correct answer to a question even from the unreliable generations of LMs. Maieutic Prompting induces a tree of explanations abductively (e.g. X is true, because ...) and recursively, then frames the inference as a satisfiability problem over these explanations and their logical relations. We test Maieutic Prompting for true/false QA on three challenging benchmarks that require complex commonsense reasoning. Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods, and as a fully unsupervised approach, performs competitively with supervised models. We also show that Maieutic Prompting improves robustness in inference while providing interpretable rationales. %R 10.18653/v1/2022.emnlp-main.82 %U https://aclanthology.org/2022.emnlp-main.82 %U https://doi.org/10.18653/v1/2022.emnlp-main.82 %P 1266-1279