FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models

Rakesh Chada, Pradeep Natarajan


Abstract
The task of learning from only a few examples (called a few-shot setting) is of key importance and relevance to a real-world setting. For question answering (QA), the current state-of-the-art pre-trained models typically need fine-tuning on tens of thousands of examples to obtain good results. Their performance degrades significantly in a few-shot setting (< 100 examples). To address this, we propose a simple fine-tuning framework that leverages pre-trained text-to-text models and is directly aligned with their pre-training framework. Specifically, we construct the input as a concatenation of the question, a mask token representing the answer span, and the context. Given this input, the model is fine-tuned with the same objective used during its pre-training. Through experimental studies on various few-shot configurations, we show that this formulation leads to significant gains on multiple QA benchmarks (an absolute gain of 34.2 F1 points on average when there are only 16 training examples). The gains extend to larger models (e.g., 72.3 F1 on SQuAD using BART-large with only 32 examples) and translate well to a multilingual setting. On the multilingual TyDiQA benchmark, our model outperforms XLM-RoBERTa-large by an absolute margin of up to 40 F1 points and an average of 33 F1 points in a few-shot setting (<= 64 training examples). We conduct detailed ablation studies to analyze factors contributing to these gains.
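As a rough illustration of the setup the abstract describes, the sketch below (Python, using the HuggingFace transformers library with BART) builds the input as question + mask token + context and the target as question + answer, then fine-tunes with the standard seq2seq objective. The prompt template, target format, and model choice here are illustrative assumptions, not the paper's verbatim configuration.

# Minimal sketch of the input construction described in the abstract.
# Assumptions: HuggingFace transformers, BART, and a plain
# "question <mask> context" template; the paper's exact template may differ.
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def build_example(question: str, context: str, answer: str):
    # Input: the question, a mask token standing in for the answer span,
    # and the context, concatenated into a single sequence.
    source = f"{question} {tokenizer.mask_token} {context}"
    # Target: the question with the mask replaced by the answer span,
    # mirroring the text-infilling objective BART was pre-trained with.
    target = f"{question} {answer}"
    return source, target

source, target = build_example(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare around 1600.",
    "William Shakespeare",
)
batch = tokenizer(source, text_target=target, return_tensors="pt", truncation=True)
loss = model(**batch).loss  # same seq2seq cross-entropy used in pre-training
loss.backward()  # gradients for one fine-tuning step; an optimizer update would follow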
Anthology ID:
2021.emnlp-main.491
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6081–6090
URL:
https://aclanthology.org/2021.emnlp-main.491
DOI:
10.18653/v1/2021.emnlp-main.491
Cite (ACL):
Rakesh Chada and Pradeep Natarajan. 2021. FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6081–6090, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models (Chada & Natarajan, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.491.pdf
Video:
https://aclanthology.org/2021.emnlp-main.491.mp4
Data
MRQA, SQuAD, TyDiQA