SPT: Learning to Selectively Insert Prompts for Better Prompt Tuning

Wei Zhu, Ming Tan


Abstract
Prompt tuning prepends a soft prompt to the input embeddings or hidden states and optimizes only the prompt to adapt pretrained models (PTMs) to downstream tasks. Previous work selects the prompt layers manually, a choice that is far from optimal and fails to exploit the full potential of prompt tuning. In this work, we propose a novel framework, Selective Prompt Tuning (SPT), that learns to select the proper prompt layers by inserting, at each intermediate layer, a prompt controlled by a learnable probabilistic gate. We further propose a novel bi-level optimization framework, SPT-DARTS, that better optimizes the learnable gates and improves the final prompt tuning performance of the learned prompt layer settings. We conduct extensive experiments on ten benchmark datasets under both full-data and few-shot scenarios. The results demonstrate that our SPT framework outperforms the previous state-of-the-art PETuning baselines with comparable or fewer tunable parameters.
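To make the abstract's core idea concrete, below is a minimal sketch (not the authors' code) of a layer-wise soft prompt controlled by a learnable gate: each wrapped layer owns a tunable prompt and a scalar gate logit, and the relaxed sigmoid gate scales the prompt before it is prepended to the hidden states. The class and parameter names (GatedPromptLayer, gate_logit, prompt_len) are illustrative assumptions; the paper's actual bi-level optimization of the gates (SPT-DARTS) is not reproduced here.

```python
# Illustrative sketch only: a learnable probabilistic gate deciding how strongly
# a per-layer soft prompt is inserted. Names and the simple sigmoid relaxation
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class GatedPromptLayer(nn.Module):
    """Wraps one (frozen) transformer block with a gated soft prompt."""

    def __init__(self, layer: nn.Module, hidden_size: int, prompt_len: int = 10):
        super().__init__()
        self.layer = layer
        # Tunable soft prompt tokens for this layer.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
        # Scalar logit of the learnable probabilistic gate.
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        batch = hidden_states.size(0)
        gate = torch.sigmoid(self.gate_logit)        # relaxed gate value in (0, 1)
        prompt = gate * self.prompt                  # gate scales the prompt
        prompt = prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the gated prompt before running the wrapped layer.
        hidden_states = torch.cat([prompt, hidden_states], dim=1)
        return self.layer(hidden_states)


# Toy usage: linear layers stand in for a frozen PTM's transformer blocks.
if __name__ == "__main__":
    hidden, num_layers = 32, 4
    backbone = nn.ModuleList(
        GatedPromptLayer(nn.Linear(hidden, hidden), hidden) for _ in range(num_layers)
    )
    x = torch.randn(2, 8, hidden)
    for blk in backbone:
        x = blk(x)
    print(x.shape)  # sequence grows by prompt_len at every gated layer
```

In this relaxed form, only the prompts and gate logits are trained while the backbone stays frozen; gates driven toward zero indicate layers where prompt insertion is unnecessary, which mirrors the layer-selection behavior the abstract describes.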
Anthology ID:
2023.emnlp-main.727
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11862–11878
URL:
https://aclanthology.org/2023.emnlp-main.727
DOI:
10.18653/v1/2023.emnlp-main.727
Cite (ACL):
Wei Zhu and Ming Tan. 2023. SPT: Learning to Selectively Insert Prompts for Better Prompt Tuning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11862–11878, Singapore. Association for Computational Linguistics.
Cite (Informal):
SPT: Learning to Selectively Insert Prompts for Better Prompt Tuning (Zhu & Tan, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.727.pdf