How Does In-Context Learning Help Prompt Tuning?

Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer


Abstract
Fine-tuning large language models is becoming ever more impractical due to their rapidly growing scale. This motivates the use of parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model, and in-context learning (ICL), in which demonstrations of the task are provided to the model in natural language without any additional training. Recently, (CITATION) propose “instruction prompt tuning” (IPT), which combines PT with ICL by concatenating a natural language demonstration with learned prompt embeddings. While all of these methods have proven effective on different tasks, how they interact with each other remains unexplored. In this paper, we empirically study when and how in-context examples improve prompt tuning by measuring the effectiveness of ICL, PT, and IPT on five text generation tasks with multiple base language models. We observe that (1) IPT does not always outperform PT, and in fact requires the in-context demonstration to be semantically similar to the test input to yield improvements; (2) PT is unstable and exhibits high variance, but combining PT and ICL (into IPT) consistently reduces variance across all five tasks; and (3) prompts learned for a specific source task via PT exhibit positive transfer when paired with in-context examples of a different target task. Our results offer actionable insights on choosing a suitable parameter-efficient adaptation method for a given task.
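The abstract describes IPT as prepending learned prompt embeddings to a natural-language demonstration and the test input, with the base model kept frozen. The following is a minimal sketch of that idea, assuming a HuggingFace-style causal LM; the class and argument names are illustrative and are not taken from the authors' released code.

```python
import torch
import torch.nn as nn


class InstructionPromptTuning(nn.Module):
    """Sketch of instruction prompt tuning (IPT): a learned soft prompt is
    concatenated with the embeddings of an in-context demonstration and the
    test input, and the result is fed to a frozen base language model."""

    def __init__(self, base_lm, num_prompt_tokens: int = 20):
        super().__init__()
        self.base_lm = base_lm
        for p in self.base_lm.parameters():
            p.requires_grad = False  # the base LM stays frozen
        hidden = base_lm.get_input_embeddings().embedding_dim
        # The only trainable parameters: the soft prompt embeddings (PT part).
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, hidden) * 0.02)

    def forward(self, demo_ids: torch.LongTensor, input_ids: torch.LongTensor):
        embed = self.base_lm.get_input_embeddings()
        demo_emb = embed(demo_ids)    # natural-language demonstration (ICL part)
        input_emb = embed(input_ids)  # test input
        prompt = self.soft_prompt.unsqueeze(0).expand(demo_ids.size(0), -1, -1)
        # IPT input: [soft prompt ; demonstration ; test input]
        inputs_embeds = torch.cat([prompt, demo_emb, input_emb], dim=1)
        return self.base_lm(inputs_embeds=inputs_embeds)
```

Dropping `demo_emb` from the concatenation recovers plain prompt tuning, and dropping the soft prompt recovers plain in-context learning, which is the comparison the paper studies.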
Anthology ID:
2024.findings-eacl.11
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
156–165
URL:
https://aclanthology.org/2024.findings-eacl.11
Cite (ACL):
Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, and Mohit Iyyer. 2024. How Does In-Context Learning Help Prompt Tuning?. In Findings of the Association for Computational Linguistics: EACL 2024, pages 156–165, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
How Does In-Context Learning Help Prompt Tuning? (Sun et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.11.pdf
Video:
https://aclanthology.org/2024.findings-eacl.11.mp4