Improving and Simplifying Pattern Exploiting Training

Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, Colin Raffel


Abstract
Recently, pre-trained language models (LMs) have achieved strong performance when fine-tuned on difficult benchmarks like SuperGLUE. However, performance can suffer when there are very few labeled examples available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning, but it relies on task-specific unlabeled data. In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET, which modifies PET’s objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data.
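The abstract does not spell out the modified objective, so the snippet below is only a rough illustration of what "denser supervision" at the mask position could look like. It is a hypothetical PyTorch sketch, not the paper's implementation: the function name, tensor shapes, and the binary cross-entropy over verbalizer tokens are assumptions made for illustration; the released code at rrmenon10/ADAPET is the authoritative reference.

```python
import torch

def decoupled_label_loss(logits, mask_pos, correct_id, incorrect_ids):
    """Hypothetical sketch of a decoupled label objective.

    Instead of a softmax restricted to the verbalizer tokens (as in PET),
    every vocabulary token is scored at the [MASK] position and a binary
    cross-entropy loss pushes the correct verbalizer token up and the
    incorrect ones down, giving a denser training signal.

    logits:        [batch, seq_len, vocab] language-model logits
    mask_pos:      [batch] index of the [MASK] token in each example
    correct_id:    [batch] vocabulary id of the correct verbalizer token
    incorrect_ids: [batch, k] vocabulary ids of incorrect verbalizer tokens
    """
    batch = torch.arange(logits.size(0), device=logits.device)

    # Probabilities over the *full* vocabulary at the mask position.
    probs = logits[batch, mask_pos].softmax(dim=-1)      # [batch, vocab]

    p_correct = probs[batch, correct_id]                  # [batch]
    p_incorrect = probs.gather(1, incorrect_ids)          # [batch, k]

    # Binary cross-entropy: correct token -> 1, incorrect tokens -> 0.
    return (-torch.log(p_correct + 1e-8).mean()
            - torch.log(1.0 - p_incorrect + 1e-8).mean())
```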
Anthology ID: 2021.emnlp-main.407
Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2021
Address: Online and Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4980–4991
URL: https://aclanthology.org/2021.emnlp-main.407
DOI: 10.18653/v1/2021.emnlp-main.407
Cite (ACL): Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and Simplifying Pattern Exploiting Training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4980–4991, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): Improving and Simplifying Pattern Exploiting Training (Tam et al., EMNLP 2021)
PDF: https://aclanthology.org/2021.emnlp-main.407.pdf
Software: 2021.emnlp-main.407.Software.zip
Video: https://aclanthology.org/2021.emnlp-main.407.mp4
Code: rrmenon10/ADAPET + additional community code
Data: BoolQ, COPA, MultiRC, ReCoRD, SuperGLUE, WSC, WiC