Encoding Explanatory Knowledge for Zero-shot Science Question Answering

Zili Zhou, Marco Valentino, Donal Landers, André Freitas


Abstract
This paper describes N-XKT (Neural encoding based on eXplanatory Knowledge Transfer), a novel method for the automatic transfer of explanatory knowledge through neural encoding mechanisms. We demonstrate that N-XKT is able to improve accuracy and generalization on science question answering (QA). Specifically, by leveraging facts from background explanatory knowledge corpora, the N-XKT model shows a clear improvement on zero-shot QA. Furthermore, we show that N-XKT can be fine-tuned on a target QA dataset, enabling faster convergence and more accurate results. Finally, we present a systematic analysis that quantifies the performance of the N-XKT model and the impact of different categories of knowledge on the zero-shot generalization task.
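To make the high-level idea concrete, the snippet below is a minimal, illustrative sketch of zero-shot multiple-choice scoring with a generic sentence encoder over background explanatory facts. It is not the authors' released N-XKT implementation: the encoder name, the toy facts, and the example question are assumptions made purely for illustration.

```python
# Illustrative sketch only (not N-XKT): rank candidate answers for a science
# question zero-shot by encoding the question, enriched with a retrieved
# explanatory fact, and comparing it to each candidate with cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

# Toy stand-ins for a background explanatory knowledge corpus (e.g., WorldTree-style facts).
facts = [
    "the sun is the source of energy for physical cycles on earth",
    "evaporation means a liquid changes into a gas",
]

question = "What provides the energy that drives the water cycle?"
candidates = ["the sun", "the moon", "ocean currents", "wind turbines"]

# Retrieve the fact most similar to the question (a crude proxy for knowledge transfer).
fact_emb = encoder.encode(facts, convert_to_tensor=True)
q_emb = encoder.encode(question, convert_to_tensor=True)
best_fact = facts[int(util.cos_sim(q_emb, fact_emb).argmax())]

# Score candidates against the fact-enriched question and pick the best one.
enriched_emb = encoder.encode(question + " " + best_fact, convert_to_tensor=True)
cand_emb = encoder.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(enriched_emb, cand_emb)[0]
print(candidates[int(scores.argmax())])  # expected: "the sun"
```

In the paper's setting, the knowledge transfer happens in the encoder itself, which is trained on the explanatory knowledge corpus before being applied zero-shot to QA; here an off-the-shelf encoder simply stands in for that step.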
Anthology ID: 2021.iwcs-1.5
Volume: Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Month: June
Year: 2021
Address: Groningen, The Netherlands (online)
Editors: Sina Zarrieß, Johan Bos, Rik van Noord, Lasha Abzianidze
Venue: IWCS
SIG: SIGSEM
Publisher: Association for Computational Linguistics
Pages: 38–50
URL: https://aclanthology.org/2021.iwcs-1.5
Cite (ACL): Zili Zhou, Marco Valentino, Donal Landers, and André Freitas. 2021. Encoding Explanatory Knowledge for Zero-shot Science Question Answering. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 38–50, Groningen, The Netherlands (online). Association for Computational Linguistics.
Cite (Informal): Encoding Explanatory Knowledge for Zero-shot Science Question Answering (Zhou et al., IWCS 2021)
PDF: https://aclanthology.org/2021.iwcs-1.5.pdf
Data: ARC (AI2 Reasoning Challenge), OpenBookQA, Worldtree