LiST: Lite Prompted Self-training Makes Parameter-efficient Few-shot Learners

Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Awadallah, Jianfeng Gao


Abstract
We present a new method LiST for efficient fine-tuning of large pre-trained language models (PLMs) in few-shot learning settings. LiST improves over recent methods that adopt prompt-based fine-tuning (FN) using two key techniques. The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based FN in few-shot settings. We use self-training in conjunction with meta-learning for re-weighting noisy pseudo-prompt labels. Traditionally, self-training is expensive as it requires updating all the model parameters repetitively. Therefore, we use a second technique for light-weight fine-tuning where we introduce a small number of task-specific parameters that are fine-tuned during self-training while keeping the PLM encoder frozen. Our experiments show that LiST can effectively leverage unlabeled data to improve the model performance for few-shot learning. Additionally, the fine-tuning process is efficient as it only updates a small percentage of the model parameters, and the overall model footprint is reduced since several tasks can share a common PLM encoder as backbone. We present a comprehensive study on six NLU tasks to validate the effectiveness of LiST. The results show that LiST improves by 35% over classic fine-tuning methods and 6% over prompt-based FN with a 96% reduction in the number of trainable parameters when fine-tuned with no more than 30 labeled examples from each task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context learning by 33% on few-shot NLU tasks.
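The abstract describes two ingredients: tuning only a small set of task-specific parameters on top of a frozen PLM encoder, and self-training that re-weights noisy pseudo-labels produced on unlabeled data. Below is a minimal PyTorch sketch of those two ideas, not the authors' released microsoft/list code: the class and function names are hypothetical, the encoder is a stand-in assumed to return pooled hidden states, and simple confidence-based weighting stands in for the paper's meta-learned re-weighting.

```python
# Hypothetical sketch of lite prompted self-training (not the official LiST code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small bottleneck adapter: the only task-specific parameters besides the head."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(F.relu(self.down(h)))  # residual bottleneck

class LiteTunedClassifier(nn.Module):
    """Frozen PLM encoder shared across tasks; only adapter + head are tuned."""
    def __init__(self, encoder: nn.Module, hidden: int, num_labels: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():          # keep the PLM encoder frozen
            p.requires_grad = False
        self.adapter = Adapter(hidden)               # small tunable parameters
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, x):
        with torch.no_grad():
            h = self.encoder(x)                      # assumed: (batch, hidden) pooled states
        return self.head(self.adapter(h))

def self_training_step(student, teacher, unlabeled_x, optimizer):
    """One self-training step: teacher pseudo-labels unlabeled data, and the
    per-example loss is down-weighted by teacher confidence (a simplified
    stand-in for the paper's meta-learned re-weighting of noisy labels)."""
    with torch.no_grad():
        probs = F.softmax(teacher(unlabeled_x), dim=-1)
        weights, pseudo = probs.max(dim=-1)          # confidence as example weight
    per_example = F.cross_entropy(student(unlabeled_x), pseudo, reduction="none")
    loss = (weights * per_example).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup only the adapter and head parameters would be passed to the optimizer, which is what keeps the per-task footprint small while the frozen encoder is shared as a common backbone across tasks.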
Anthology ID:
2022.findings-naacl.174
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2262–2281
URL:
https://aclanthology.org/2022.findings-naacl.174
DOI:
10.18653/v1/2022.findings-naacl.174
Cite (ACL):
Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Awadallah, and Jianfeng Gao. 2022. LiST: Lite Prompted Self-training Makes Parameter-efficient Few-shot Learners. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2262–2281, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
LiST: Lite Prompted Self-training Makes Parameter-efficient Few-shot Learners (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.174.pdf
Software:
 2022.findings-naacl.174.software.zip
Code:
 microsoft/list
Data:
 GLUE, MPQA Opinion Corpus, MultiNLI, SST, SST-2