Adapting Pre-trained Generative Models for Extractive Question Answering

Prabir Mallick, Tapas Nayak, Indrajit Bhattacharya


Abstract
Pre-trained generative models such as BART and T5 have gained prominence as a preferred method for text generation in various natural language processing tasks, including abstractive long-form question answering (QA) and summarization. However, the potential of generative models in extractive QA tasks, where discriminative models are commonly employed, remains largely unexplored. Discriminative models often encounter challenges associated with label sparsity, particularly when only a small portion of the context contains the answer; the challenge is more pronounced for multi-span answers. In this work, we introduce a novel approach that leverages pre-trained generative models for extractive QA by generating the indexes of the context tokens or sentences that form part of the answer. Through comprehensive evaluations on multiple extractive QA datasets, including MultiSpanQA, BioASQ, MASHQA, and WikiQA, we demonstrate the superior performance of our proposed approach compared to existing state-of-the-art models.
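The core idea — casting extractive QA as generation of context-token indexes rather than token-level labels — can be illustrated with a minimal sketch. This is not the authors' code; the helper names and the space-separated index format are assumptions for illustration. A seq2seq model (e.g., BART or T5) would be trained to emit the index sequence as its output text:

```python
def build_index_target(context_tokens, answer_spans):
    """Build the generation target: the indexes of all context tokens
    inside the (possibly multi-span) answer.
    answer_spans: list of (start, end) token offsets, end exclusive."""
    indexes = []
    for start, end in answer_spans:
        indexes.extend(range(start, end))
    # The generative model is trained to emit these indexes as text,
    # avoiding the sparse per-token labels a discriminative tagger needs.
    return " ".join(str(i) for i in indexes)

def decode_answer(context_tokens, generated):
    """Map a generated index sequence back to the answer tokens,
    skipping any malformed or out-of-range outputs."""
    picked = [context_tokens[int(tok)] for tok in generated.split()
              if tok.isdigit() and int(tok) < len(context_tokens)]
    return " ".join(picked)

# Toy multi-span example: two answer spans in one context.
context = "Aspirin reduces fever and also relieves mild pain".split()
target = build_index_target(context, [(0, 3), (5, 8)])
# target == "0 1 2 5 6 7"
answer = decode_answer(context, target)
# answer == "Aspirin reduces fever relieves mild pain"
```

Multi-span answers, which are awkward for span-boundary classifiers, reduce here to simply concatenating more indexes into one target sequence.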
Anthology ID:
2023.gem-1.11
Volume:
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, Hooman Sedghamiz
Venues:
GEM | WS
Publisher:
Association for Computational Linguistics
Pages:
128–137
URL:
https://aclanthology.org/2023.gem-1.11
Cite (ACL):
Prabir Mallick, Tapas Nayak, and Indrajit Bhattacharya. 2023. Adapting Pre-trained Generative Models for Extractive Question Answering. In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 128–137, Singapore. Association for Computational Linguistics.
Cite (Informal):
Adapting Pre-trained Generative Models for Extractive Question Answering (Mallick et al., GEM-WS 2023)
PDF:
https://aclanthology.org/2023.gem-1.11.pdf