The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation

Ernie Chang, Xiaoyu Shen, Alex Marin, Vera Demberg


Abstract
We propose a shared task on training instance selection for few-shot neural text generation. Large-scale pretrained language models have led to dramatic improvements in few-shot text generation. Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances. Little to no attention has been paid to selection strategies and how they affect model performance. Studying selection strategies can help us (1) make the best use of our annotation budget in downstream tasks and (2) better benchmark few-shot text generation models. We welcome submissions that present selection strategies and their effects on generation quality.
Anthology ID:
2021.inlg-1.36
Volume:
Proceedings of the 14th International Conference on Natural Language Generation
Month:
August
Year:
2021
Address:
Aberdeen, Scotland, UK
Editors:
Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
325–330
URL:
https://aclanthology.org/2021.inlg-1.36
DOI:
10.18653/v1/2021.inlg-1.36
Cite (ACL):
Ernie Chang, Xiaoyu Shen, Alex Marin, and Vera Demberg. 2021. The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation. In Proceedings of the 14th International Conference on Natural Language Generation, pages 325–330, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal):
The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation (Chang et al., INLG 2021)
PDF:
https://aclanthology.org/2021.inlg-1.36.pdf