Few-shot Unified Question Answering: Tuning Models or Prompts?

Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Bhat, Yingbo Zhou


Abstract
Question-answering (QA) tasks often target specific question types, knowledge domains, or reasoning skills, leading to specialized models that cater to particular categories of QA tasks. While recent research has explored unified QA models, such models are usually built for high-resource scenarios and require re-training to extend their capabilities. To overcome these drawbacks, this paper explores the potential of two tuning paradigms, model tuning and prompt tuning, for unified QA in a low-resource setting. It provides an exhaustive analysis of their applicability across 16 QA datasets, revealing that prompt tuning can perform as well as model tuning in a few-shot setting given a good initialization. The study also shows that parameter sharing results in superior few-shot performance, that simple knowledge-transfer techniques for prompt initialization can be effective, and that prompt tuning achieves a significant performance boost from pre-training in a low-resource regime. The research offers insights into the advantages and limitations of prompt tuning for unified QA in a few-shot setting, contributing to the development of effective and efficient systems in low-resource scenarios.
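As a pointer to what the prompt-tuning paradigm contrasted here looks like in practice, below is a minimal sketch of soft prompt tuning over a frozen encoder-decoder backbone (in the style of Lester et al., 2021). It is illustrative only: the t5-base backbone, the 100-token prompt length, the learning rate, and the vocabulary-sampled initialization are assumptions for the sketch, not the paper's exact configuration.

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical setup; the paper's backbone and hyperparameters may differ.
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")

# Freeze the backbone: in prompt tuning, only the soft prompt is updated.
for p in model.parameters():
    p.requires_grad = False

PROMPT_LEN = 100  # assumed prompt length
# Initialize the soft prompt from embeddings of sampled vocabulary tokens;
# the paper's finding is that a good initialization (e.g., transferred from
# related tasks) is what lets prompt tuning match model tuning few-shot.
init_ids = torch.randint(0, tokenizer.vocab_size, (PROMPT_LEN,))
soft_prompt = torch.nn.Parameter(
    model.get_input_embeddings()(init_ids).detach().clone()
)

def forward_with_prompt(input_ids, labels):
    # Prepend the trainable prompt to the frozen input embeddings.
    inputs_embeds = model.get_input_embeddings()(input_ids)
    batch_size = inputs_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
    inputs_embeds = torch.cat([prompt, inputs_embeds], dim=1)
    return model(inputs_embeds=inputs_embeds, labels=labels)

# Only the prompt parameters are optimized.
optimizer = torch.optim.AdamW([soft_prompt], lr=0.3)

Because the backbone is frozen, only PROMPT_LEN x d_model parameters (roughly 77K for t5-base with a 100-token prompt) are trained per task, which is what makes prompts cheap to store, share across tasks, and transfer, in contrast to model tuning, which updates all backbone weights.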
Anthology ID: 2023.findings-emnlp.550
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8200–8220
URL: https://aclanthology.org/2023.findings-emnlp.550
DOI: 10.18653/v1/2023.findings-emnlp.550
Cite (ACL): Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Bhat, and Yingbo Zhou. 2023. Few-shot Unified Question Answering: Tuning Models or Prompts?. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8200–8220, Singapore. Association for Computational Linguistics.
Cite (Informal): Few-shot Unified Question Answering: Tuning Models or Prompts? (Bansal et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.550.pdf