Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification

Kush Dubey


Abstract
Few-shot learning benchmarks are critical for evaluating modern NLP techniques. It is possible, however, that benchmarks favor methods that can easily exploit unlabeled text, because researchers may use unlabeled text from the test set to pretrain their models. Given the dearth of research on this potential problem, we run experiments to quantify the bias caused by pretraining on unlabeled test-set text instead of on unlabeled, independently drawn text. Controlled few-shot and zero-shot experiments on 25 classification tasks with three language models (BERT, GPT-2, and Mistral 7B) find no evidence of overoptimism. Furthermore, we demonstrate the importance of repeated subsampling when studying few-shot text classification, and we recommend that few-shot learning benchmarks include multiple training folds. Code and data are available here: https://github.com (currently omitted for anonymity).
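The repeated-subsampling recommendation can be made concrete with a short sketch. The following is a minimal illustration, not the paper's code: it draws many independent k-shot training folds, fits a classifier on each, and reports the spread of test accuracy across folds. The dataset (20 Newsgroups), the TF-IDF + logistic-regression model standing in for BERT/GPT-2/Mistral 7B, and the choices of k = 16 and 20 folds are all illustrative assumptions.

```python
# Minimal sketch of repeated subsampling for few-shot evaluation.
# Assumptions: 20 Newsgroups as the task, TF-IDF + logistic regression
# as a cheap stand-in for a pretrained language model, k=16, 20 folds.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def kshot_indices(labels, k, rng):
    """Sample k training examples per class, without replacement."""
    idx = []
    for c in np.unique(labels):
        pool = np.flatnonzero(labels == c)
        idx.extend(rng.choice(pool, size=k, replace=False))
    return np.array(idx)

categories = ["sci.med", "sci.space"]  # illustrative binary task
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

vec = TfidfVectorizer(max_features=20_000)
X_train = vec.fit_transform(train.data)
X_test = vec.transform(test.data)
y_train = np.array(train.target)
y_test = np.array(test.target)

rng = np.random.default_rng(0)
accs = []
for _ in range(20):  # 20 independently drawn k-shot training folds
    idx = kshot_indices(y_train, k=16, rng=rng)
    clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    accs.append(clf.score(X_test, y_test))

print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f} over {len(accs)} folds")
```

Reporting the mean and standard deviation across folds, rather than the accuracy of a single fold, is the kind of multiple-fold protocol the abstract recommends: a single k-shot draw can be highly unrepresentative of a method's typical performance.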
Anthology ID: 2024.genbench-1.1
Volume: Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Dieuwke Hupkes, Verna Dankers, Khuyagbaatar Batsuren, Amirhossein Kazemnejad, Christos Christodoulopoulos, Mario Giulianelli, Ryan Cotterell
Venue: GenBench
Publisher: Association for Computational Linguistics
Pages: 1–26
URL: https://aclanthology.org/2024.genbench-1.1
Cite (ACL): Kush Dubey. 2024. Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification. In Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP, pages 1–26, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification (Dubey, GenBench 2024)
PDF: https://aclanthology.org/2024.genbench-1.1.pdf