%0 Conference Proceedings
%T Muppet: Massive Multi-task Representations with Pre-Finetuning
%A Aghajanyan, Armen
%A Gupta, Anchit
%A Shrivastava, Akshat
%A Chen, Xilun
%A Zettlemoyer, Luke
%A Gupta, Sonal
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F aghajanyan-etal-2021-muppet
%X We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g. RoBERTa) and generation models (e.g. BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used, up until a critical point (usually above 15) after which performance improves linearly in the number of tasks.
%R 10.18653/v1/2021.emnlp-main.468
%U https://aclanthology.org/2021.emnlp-main.468
%U https://doi.org/10.18653/v1/2021.emnlp-main.468
%P 5799-5811