%0 Conference Proceedings
%T Clues Before Answers: Generation-Enhanced Multiple-Choice QA
%A Huang, Zixian
%A Wu, Ao
%A Zhou, Jiaying
%A Gu, Yu
%A Zhao, Yue
%A Cheng, Gong
%Y Carpuat, Marine
%Y de Marneffe, Marie-Catherine
%Y Meza Ruiz, Ivan Vladimir
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F huang-etal-2022-clues
%X A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.
%R 10.18653/v1/2022.naacl-main.239
%U https://aclanthology.org/2022.naacl-main.239
%U https://doi.org/10.18653/v1/2022.naacl-main.239
%P 3272-3287