Kush Dubey


2024

Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification
Kush Dubey
Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP

Few-shot learning benchmarks are critical for evaluating modern NLP techniques. It is possible, however, that benchmarks favor methods that easily make use of unlabeled text, because researchers can use unlabeled text from the test set to pretrain their models. Given the dearth of research on this potential problem, we run experiments to quantify the bias caused by pretraining on unlabeled test set text instead of on unlabeled, independently drawn text. Controlled few-shot and zero-shot experiments on 25 classification tasks and 3 language models (BERT, GPT-2, and Mistral 7B) do not find evidence of overoptimism. Furthermore, we demonstrate the importance of repeated subsampling when studying few-shot text classification, and recommend that few-shot learning benchmarks include multiple training folds. Code and data are available here: https://github.com (currently omitted for anonymity).
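The repeated-subsampling recommendation can be made concrete with a short sketch. The following is a minimal, hypothetical illustration, not the paper's released code: `train_fn` and `eval_fn` stand in for any few-shot training and evaluation routines, and names such as `shots_per_class` and `n_folds` are assumed parameters chosen for this example.

```python
import random
from statistics import mean, stdev


def repeated_subsample_eval(texts, labels, train_fn, eval_fn,
                            shots_per_class=8, n_folds=10, seed=0):
    """Evaluate a few-shot method across multiple random training folds.

    train_fn(train_texts, train_labels) -> model
    eval_fn(model, test_texts, test_labels) -> accuracy (float)

    Hypothetical sketch: averaging over n_folds independently drawn
    few-shot training sets reduces the variance of a single draw.
    """
    rng = random.Random(seed)

    # Group examples by class so each fold is class-balanced.
    by_class = {}
    for t, y in zip(texts, labels):
        by_class.setdefault(y, []).append(t)

    scores = []
    for _ in range(n_folds):
        train_texts, train_labels = [], []
        for y, pool in by_class.items():
            k = min(shots_per_class, len(pool))
            for t in rng.sample(pool, k):
                train_texts.append(t)
                train_labels.append(y)

        # Everything not sampled for training serves as the test set
        # (assumes texts are unique; deduplicate beforehand if not).
        train_set = set(train_texts)
        test = [(t, y) for t, y in zip(texts, labels) if t not in train_set]
        test_texts, test_labels = zip(*test)

        model = train_fn(train_texts, train_labels)
        scores.append(eval_fn(model, list(test_texts), list(test_labels)))

    return mean(scores), stdev(scores)
```

Reporting the mean and standard deviation over folds, rather than a single fold's score, is what lets a benchmark distinguish genuine method differences from the luck of one training subsample.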