A Few More Examples May Be Worth Billions of Parameters

Yuval Kirstain, Patrick Lewis, Sebastian Riedel, Omer Levy


Abstract
We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks. Our exploration reveals that while scaling parameters consistently yields performance improvements, the contribution of additional examples depends heavily on the task’s format. Specifically, in open question answering tasks, enlarging the training set does not improve performance. In contrast, classification, extractive question answering, and multiple-choice tasks benefit so much from additional examples that collecting a few hundred examples is often “worth” billions of parameters. We hypothesize that, unlike open question answering, which involves recalling specific information, solving strategies for tasks with a more restricted output space transfer across examples and can therefore be learned from small amounts of labeled data.
Anthology ID: 2022.findings-emnlp.72
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1017–1029
URL: https://aclanthology.org/2022.findings-emnlp.72
DOI: 10.18653/v1/2022.findings-emnlp.72
Cite (ACL): Yuval Kirstain, Patrick Lewis, Sebastian Riedel, and Omer Levy. 2022. A Few More Examples May Be Worth Billions of Parameters. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1017–1029, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): A Few More Examples May Be Worth Billions of Parameters (Kirstain et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-emnlp.72.pdf
Video: https://aclanthology.org/2022.findings-emnlp.72.mp4