POE: Process of Elimination for Multiple Choice Reasoning

Chenkai Ma, Xinya Du


Abstract
Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but they typically treat all options in these tasks equally. As humans often first eliminate wrong options before picking the final correct answer, we argue that a similar two-step strategy can make LMs better at these tasks. To this end, we present the Process of Elimination (POE), a two-step scoring method. In the first step, POE scores each option and eliminates seemingly wrong options. In the second step, POE masks these wrong options and makes the final prediction from the remaining options. Zero-shot experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a subsequent analysis finds our method to be especially performant on logical reasoning tasks. We further analyze the effect of masks, and show that POE applies to few-shot settings and large language models (LLMs) like ChatGPT.
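The two-step procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `score_option` is a hypothetical stand-in for the language model's option-scoring function (e.g., an option log-likelihood), and the mean-score elimination threshold is an assumption for the sketch.

```python
# Minimal sketch of a two-step Process of Elimination (POE).
# `score_option(question, options, i)` is a hypothetical callback that
# returns the LM's score for option i given the full option list.

from typing import Callable, List

MASK = "[MASK]"

def process_of_elimination(
    question: str,
    options: List[str],
    score_option: Callable[[str, List[str], int], float],
) -> int:
    # Step 1: score every option; eliminate those scoring below the
    # mean score (an assumed elimination criterion for this sketch).
    scores = [score_option(question, options, i) for i in range(len(options))]
    threshold = sum(scores) / len(scores)
    keep = [i for i, s in enumerate(scores) if s >= threshold]

    # Step 2: replace eliminated options with a mask token, then make
    # the final prediction among the surviving options only.
    masked = [opt if i in keep else MASK for i, opt in enumerate(options)]
    final_scores = {i: score_option(question, masked, i) for i in keep}
    return max(final_scores, key=final_scores.get)
```

The key design point is that the second scoring pass sees the masked option list, so the model's final comparison is restricted to plausible candidates rather than the full, noisier option set.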
Anthology ID:
2023.emnlp-main.273
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4487–4496
URL:
https://aclanthology.org/2023.emnlp-main.273
DOI:
10.18653/v1/2023.emnlp-main.273
Cite (ACL):
Chenkai Ma and Xinya Du. 2023. POE: Process of Elimination for Multiple Choice Reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4487–4496, Singapore. Association for Computational Linguistics.
Cite (Informal):
POE: Process of Elimination for Multiple Choice Reasoning (Ma & Du, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.273.pdf
Video:
https://aclanthology.org/2023.emnlp-main.273.mp4