Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning

Hyeonmin Ha, Soyoung Jung, Jinsol Park, Minjoon Seo, Seung-won Hwang, Byung-Gon Chun


Abstract
Prompting has gained tremendous attention as an efficient method for adapting large-scale language models. However, prompts often act against human intuition and yield unstable performance, which has motivated methods that automatically search for effective prompts. One popular approach is gradient-based search, which iteratively updates a (randomly) initialized prompt toward the optimal one, guided by gradients. We propose CoRe, a novel regularization method for gradient-based prompt tuning techniques that guides a prompt to properly produce the task context. CoRe realizes two regularization effects, context attuning and context filtering, that improve prediction performance in a zero-shot in-context learning setting, where a model makes inferences using only the prompt tuned by CoRe, without any demonstration examples. Context attuning guides the context generated by the input and the tuned prompt toward embedding the appropriate context for the task. Our theoretical analysis shows that regularizing the context leads to improved zero-shot in-context learning performance. Context filtering steers the prompt to select only task-related context, so that context attuning can focus solely on creating and conveying the right task context. We evaluate CoRe on natural language understanding datasets with two large language models, GPT2-XL and GPT-J. Our training scheme yields performance improvements of up to 11.9% on GPT2-XL and up to 6.3% on GPT-J in zero-shot settings.
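To make the setup concrete, below is a minimal sketch of gradient-based soft prompt tuning with an auxiliary context regularizer in the spirit of CoRe. The specific regularizer shown (an MSE pulling together the final hidden states induced by two examples of the same task, echoing the paper's title) and hyperparameters such as `context_reg_weight` are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of gradient-based soft prompt tuning with an auxiliary
# context regularizer, in the spirit of CoRe. The regularizer below (an MSE
# pulling together the final hidden states induced by two examples of the
# same task) and hyperparameters such as `context_reg_weight` are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")
for p in model.parameters():  # freeze the LM; only the soft prompt is tuned
    p.requires_grad_(False)

n_prompt, dim = 10, model.config.n_embd
soft_prompt = torch.randn(n_prompt, dim, device=device, requires_grad=True)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)
context_reg_weight = 0.1  # assumed regularization strength

def forward_with_prompt(text):
    """Run the frozen LM on `text` with the soft prompt prepended."""
    ids = tok(text, return_tensors="pt").input_ids.to(device)
    embeds = model.transformer.wte(ids)                        # (1, T, dim)
    embeds = torch.cat([soft_prompt.unsqueeze(0), embeds], 1)  # prepend prompt
    return model(inputs_embeds=embeds, output_hidden_states=True)

def loss_fn(example_a, example_b, target_id):
    # Task loss: next-token prediction on example A.
    out_a = forward_with_prompt(example_a)
    task_loss = F.cross_entropy(out_a.logits[0, -1:], target_id)
    # Context regularizer (assumed form): pull the contexts produced by two
    # inputs of the same task toward each other, so the prompt encodes the
    # task rather than example-specific detail.
    out_b = forward_with_prompt(example_b)
    h_a = out_a.hidden_states[-1][0, -1]
    h_b = out_b.hidden_states[-1][0, -1]
    return task_loss + context_reg_weight * F.mse_loss(h_a, h_b)

# One illustrative update step (hypothetical sentiment-classification data):
target = tok(" positive", return_tensors="pt").input_ids[0, :1].to(device)
loss = loss_fn("Review: great movie. Sentiment:",
               "Review: I loved every minute. Sentiment:",
               target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference the tuned `soft_prompt` is simply prepended to the test input's embeddings, matching the zero-shot setting the abstract describes: no demonstration examples are fed to the model.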
Anthology ID:
2023.findings-acl.206
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3335–3350
URL:
https://aclanthology.org/2023.findings-acl.206
DOI:
10.18653/v1/2023.findings-acl.206
Cite (ACL):
Hyeonmin Ha, Soyoung Jung, Jinsol Park, Minjoon Seo, Seung-won Hwang, and Byung-Gon Chun. 2023. Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3335–3350, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning (Ha et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.206.pdf