Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering

Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig


Abstract
Generative question answering (QA) models generate answers to questions either based solely on the parameters of the model (the closed-book setting) or by additionally retrieving relevant evidence (the open-book setting). Generative QA models can answer some relatively complex questions, but the mechanism through which they do so is still poorly understood. We perform several studies aimed at better understanding the multi-hop reasoning capabilities of generative QA models. First, we decompose multi-hop questions into multiple corresponding single-hop questions, and find marked inconsistency in QA models’ answers on these pairs of ostensibly identical question chains. Second, we find that models lack zero-shot multi-hop reasoning ability: when trained only on single-hop questions, models generalize poorly to multi-hop questions. Finally, we demonstrate that it is possible to improve models’ zero-shot multi-hop reasoning capacity through two methods that approximate real multi-hop natural language (NL) questions by training on either concatenations of single-hop questions or logical forms (SPARQL). In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques. Code is available at https://github.com/jzbjyb/multihop.
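To make the setup concrete, below is a minimal sketch of the two training approximations the abstract describes; the example question, the intermediate answer, and the Freebase-style SPARQL predicates are invented for illustration and do not come from the paper.

# Hypothetical illustration of approximating a multi-hop NL question.
multi_hop = "Who is the spouse of the director of Inception?"

# Decomposition into a chain of single-hop questions; the second hop
# substitutes in the answer to the first hop.
single_hops = [
    "Who is the director of Inception?",        # -> "Christopher Nolan"
    "Who is the spouse of Christopher Nolan?",  # -> final answer
]

# Approximation 1: train on the concatenation of single-hop questions.
concat_input = " ".join(single_hops)

# Approximation 2: train on a logical form (SPARQL) of the question
# (predicate names here are made up for the sketch).
sparql_input = (
    "SELECT ?spouse WHERE { "
    "?film ns:film.film.name 'Inception' . "
    "?film ns:film.film.directed_by ?director . "
    "?director ns:people.person.spouse_s ?spouse . }"
)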
Anthology ID:
2022.coling-1.152
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1765–1775
URL:
https://aclanthology.org/2022.coling-1.152
Cite (ACL):
Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2022. Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1765–1775, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering (Jiang et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.152.pdf
Data
ComplexWebQuestions, DROP, HotpotQA, WebQuestionsSP