Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning

Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia Liu, Chaosheng Dong, Bryan Wang


Abstract
This paper introduces Q-tuning, a novel approach for continual prompt tuning that enables the lifelong learning of a pre-trained language model. When learning a new task, Q-tuning trains a task-specific prompt by adding it to a prompt queue consisting of the prompts from older tasks. To better transfer the knowledge of old tasks, we design an adaptive knowledge aggregation technique that reweights the previous prompts in the queue with a learnable low-rank matrix. Once the prompt queue reaches its maximum capacity, we leverage a PCA-based eviction rule to reduce the queue's size, allowing the newly trained prompt to be added while preserving the primary knowledge of old tasks. To mitigate the accumulation of information loss caused by the eviction, we additionally propose a globally shared prefix prompt and a memory retention regularization based on information theory. Extensive experiments demonstrate that our approach substantially outperforms state-of-the-art methods on continual prompt tuning benchmarks. Moreover, our approach enables lifelong learning on linearly growing task sequences while requiring constant complexity for training and inference.
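To make the queue mechanics in the abstract concrete, below is a minimal PyTorch sketch of the three described ingredients: a queue of task prompts, a learnable low-rank reweighting for aggregation, and a PCA-based eviction rule. All shapes, hyperparameters, and the exact form of the reweighting (here, one learned scalar per queued prompt token from a rank-`RANK` factorization) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the abstract's mechanisms; shapes and details
# are assumptions, not the paper's official code.
import torch
import torch.nn as nn

PROMPT_LEN, EMB_DIM = 20, 768   # soft-prompt shape (assumed)
MAX_QUEUE, RANK = 4, 2          # queue capacity and low-rank size (assumed)

class PromptQueue(nn.Module):
    def __init__(self):
        super().__init__()
        self.queue = []  # frozen (PROMPT_LEN, EMB_DIM) prompts of old tasks
        # Low-rank factors A @ B yield one learned scalar weight per queued
        # prompt token; factorizing keeps the trainable parameter count small.
        self.A = nn.Parameter(0.02 * torch.randn(MAX_QUEUE * PROMPT_LEN, RANK))
        self.B = nn.Parameter(0.02 * torch.randn(RANK, 1))

    def add(self, prompt: torch.Tensor) -> None:
        """Append a newly trained task prompt, evicting first if full."""
        if len(self.queue) == MAX_QUEUE:
            self._evict_with_pca()
        self.queue.append(prompt.detach())

    def _evict_with_pca(self) -> None:
        # Compress all queued prompts into a single prompt spanned by the
        # top principal directions of the old prompt tokens, so the queue
        # shrinks while most of the old-task variance is preserved.
        stacked = torch.cat(self.queue, dim=0)              # (k*LEN, DIM)
        _, S, V = torch.pca_lowrank(stacked, q=PROMPT_LEN)  # V: (DIM, LEN)
        self.queue = [(V * S).T]                            # (LEN, DIM)

    def forward(self) -> torch.Tensor:
        # Aggregate the queue: reweight every queued prompt token with its
        # learned scalar from A @ B; the result would be prepended to the
        # frozen LM's input embeddings downstream.
        stacked = torch.cat(self.queue, dim=0)              # (k*LEN, DIM)
        weights = self.A[: stacked.size(0)] @ self.B        # (k*LEN, 1)
        return weights * stacked                            # broadcast on DIM

# Usage: the queue stays bounded as the task sequence grows, which is what
# gives the constant training/inference complexity claimed in the abstract.
q = PromptQueue()
for task in range(6):                        # more tasks than queue capacity
    q.add(torch.randn(PROMPT_LEN, EMB_DIM))  # stand-in for a tuned prompt
aggregated = q()                             # prompt for the new task's inputs
```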
Anthology ID:
2024.findings-naacl.166
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2595–2622
URL:
https://aclanthology.org/2024.findings-naacl.166
DOI:
10.18653/v1/2024.findings-naacl.166
Cite (ACL):
Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia Liu, Chaosheng Dong, and Bryan Wang. 2024. Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2595–2622, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning (Guo et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.166.pdf