Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering

Pratyay Banerjee, Chitta Baral


Abstract
The aim of all Question Answering (QA) systems is to generalize to unseen questions. Current supervised methods rely on expensive data annotation, and such annotations can introduce unintended annotator bias, leading systems to exploit the bias rather than solve the actual task. This work proposes Knowledge Triplet Learning (KTL), a self-supervised task over knowledge graphs, together with heuristics to create synthetic graphs for commonsense and scientific knowledge. Using KTL to perform zero-shot question answering, our experiments show considerable improvements over large pre-trained transformer language models.
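The abstract leaves the KTL objective implicit: given a knowledge triplet (subject, relation, object), one element is masked and the model learns to reconstruct its representation from the other two; at inference, a (question, relation, candidate answer) triple is scored the same way and the closest candidate wins. Below is a minimal PyTorch sketch of that idea. The encoder, the names (KTLScorer, f_tail, rank_answers), and the L2 reconstruction distance are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KTLScorer(nn.Module):
    """Toy Knowledge Triplet Learning model: reconstruct the embedding of a
    masked triplet element (here, the tail) from the other two elements."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        # A mean-pooled bag-of-words encoder stands in for the paper's
        # transformer encoder (an assumption made to keep the sketch small).
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.f_tail = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def encode(self, ids: torch.Tensor) -> torch.Tensor:
        # ids: (batch, seq_len) integer token ids -> (batch, dim) embeddings.
        return self.embed(ids)

    def forward(self, head_ids, rel_ids, tail_ids):
        h = self.encode(head_ids)
        r = self.encode(rel_ids)
        t = self.encode(tail_ids)
        t_hat = self.f_tail(torch.cat([h, r], dim=-1))  # predict masked tail
        return F.mse_loss(t_hat, t)                     # self-supervised loss

def rank_answers(model, q_ids, rel_ids, candidates):
    """Zero-shot QA: frame (question, relation, answer) as a triplet and pick
    the candidate whose embedding is closest to the reconstructed tail."""
    with torch.no_grad():
        h, r = model.encode(q_ids), model.encode(rel_ids)
        t_hat = model.f_tail(torch.cat([h, r], dim=-1))
        dists = [F.mse_loss(t_hat, model.encode(c)).item() for c in candidates]
    return min(range(len(dists)), key=dists.__getitem__)

# Usage with dummy token ids (batch size 1, three-token spans):
model = KTLScorer(vocab_size=1000)
q = torch.randint(0, 1000, (1, 3))
rel = torch.randint(0, 1000, (1, 3))
answers = [torch.randint(0, 1000, (1, 3)) for _ in range(4)]
best = rank_answers(model, q, rel, answers)
```

In the paper the same reconstruction objective is applied to all three triplet positions over graphs such as ATOMIC (listed under Data below), which is what allows answer candidates to be scored without any task-specific QA labels.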
Anthology ID:
2020.emnlp-main.11
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
151–162
URL:
https://aclanthology.org/2020.emnlp-main.11
DOI:
10.18653/v1/2020.emnlp-main.11
Cite (ACL):
Pratyay Banerjee and Chitta Baral. 2020. Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 151–162, Online. Association for Computational Linguistics.
Cite (Informal):
Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering (Banerjee & Baral, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.11.pdf
Optional supplementary material:
 2020.emnlp-main.11.OptionalSupplementaryMaterial.zip
Video:
 https://slideslive.com/38939163
Data
ATOMIC, CommonsenseQA, OpenBookQA, QASC, StoryCloze, WinoGrande