%0 Conference Proceedings
%T P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks
%A Liu, Xiao
%A Ji, Kaixuan
%A Fu, Yicheng
%A Tam, Weng
%A Du, Zhengxiao
%A Yang, Zhilin
%A Tang, Jie
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F liu-etal-2022-p
%X Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prompt tuning cannot handle hard sequence labeling tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of finetuning while having only 0.1%-3% tuned parameters. Our method P-Tuning v2 is an implementation of Deep Prompt Tuning (CITATION) optimized and adapted for NLU. Given the universality and simplicity of P-Tuning v2, we believe it can serve as an alternative to finetuning and a strong baseline for future research.
%R 10.18653/v1/2022.acl-short.8
%U https://aclanthology.org/2022.acl-short.8
%U https://doi.org/10.18653/v1/2022.acl-short.8
%P 61-68