True Few-Shot Learning with Prompts—A Real-World Perspective

Timo Schick, Hinrich Schütze
Abstract
Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance as they had difficulty getting good results in a “true” few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of Pet, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, Pet performs strongly in true few-shot settings without a dev set. Crucial for this strong performance are a number of design choices, including Pet’s ability to intelligently handle multiple prompts. We put our findings to a real-world test by running Pet on RAFT, a benchmark of tasks taken from realistic NLP applications for which no labeled dev or test sets are available. Pet achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners can successfully be applied in true few-shot settings and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.
Anthology ID:
2022.tacl-1.41
Volume:
Transactions of the Association for Computational Linguistics, Volume 10
Year:
2022
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
716–731
URL:
https://aclanthology.org/2022.tacl-1.41
DOI:
10.1162/tacl_a_00485
Cite (ACL):
Timo Schick and Hinrich Schütze. 2022. True Few-Shot Learning with Prompts—A Real-World Perspective. Transactions of the Association for Computational Linguistics, 10:716–731.
Cite (Informal):
True Few-Shot Learning with Prompts—A Real-World Perspective (Schick & Schütze, TACL 2022)
PDF:
https://aclanthology.org/2022.tacl-1.41.pdf