AutoEQA: Auto-Encoding Questions for Extractive Question Answering

Stalin Varanasi, Saadullah Amin, Guenter Neumann


Abstract
There has been significant progress in the field of Extractive Question Answering (EQA) in recent years. However, most approaches rely on annotations of answer spans in the corresponding passages. In this work, we address EQA when no answer-span annotations are present, i.e., when the dataset contains only questions and their corresponding passages. Our method is based on auto-encoding of the question: it performs a question-answering task during encoding and a question-generation task during decoding. We show that our method performs well in a zero-shot setting and can provide an additional loss to boost performance on EQA.
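The encode-then-decode objective described in the abstract can be illustrated with a toy sketch. This is an assumed simplification, not the paper's implementation: the neural span extractor and the question-generation decoder are replaced by simple word-overlap scorers, and all function names are illustrative.

```python
import math

def span_scores(passage, question, max_len=4):
    """Score every candidate answer span by word overlap with the question
    (a toy stand-in for the neural span extractor used during encoding)."""
    q_words = set(question.lower().split())
    tokens = passage.split()
    scores = {}
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            span = tokens[i:j]
            overlap = sum(t.lower() in q_words for t in span)
            scores[(i, j)] = overlap / len(span)
    return scores

def reconstruction_loss(span_tokens, question):
    """Negative log-likelihood of regenerating the question from the span
    (a toy stand-in for the decoder's question-generation loss)."""
    span_words = set(t.lower() for t in span_tokens)
    # assign probability 0.9 to question words the span covers, 0.1 otherwise
    return -sum(math.log(0.9 if w in span_words else 0.1)
                for w in question.lower().split())

def autoeqa_step(passage, question):
    """One unsupervised step: pick the highest-scoring span (encoding), then
    measure how well it reconstructs the question (decoding). The decoding
    loss is the training signal when no answer spans are annotated."""
    scores = span_scores(passage, question)
    (i, j), _ = max(scores.items(), key=lambda kv: kv[1])
    span = passage.split()[i:j]
    return span, reconstruction_loss(span, question)
```

Spans that help regenerate the question incur a lower reconstruction loss, so minimizing that loss pushes the extractor toward question-relevant spans without any answer-span labels.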
Anthology ID:
2021.findings-emnlp.403
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4706–4712
URL:
https://aclanthology.org/2021.findings-emnlp.403
DOI:
10.18653/v1/2021.findings-emnlp.403
Cite (ACL):
Stalin Varanasi, Saadullah Amin, and Guenter Neumann. 2021. AutoEQA: Auto-Encoding Questions for Extractive Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4706–4712, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
AutoEQA: Auto-Encoding Questions for Extractive Question Answering (Varanasi et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.403.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.403.mp4
Data
SQuAD