Visual Prompt Tuning for Few-Shot Text Classification

Jingyuan Wen, Yutian Luo, Nanyi Fei, Guoxing Yang, Zhiwu Lu, Hao Jiang, Jie Jiang, Zhao Cao


Abstract
Deploying large-scale pre-trained models in the prompt-tuning paradigm has demonstrated promising performance in few-shot learning. In particular, vision-language pre-training models (VL-PTMs) have been intensively explored for various few-shot downstream tasks. However, most existing works apply VL-PTMs only to visual tasks such as image classification, with few attempts made on language tasks such as text classification. In few-shot text classification, a feasible paradigm for deploying VL-PTMs is to align the input samples with their category names via the text encoder. However, this wastes the visual information learned by the image encoder of VL-PTMs. To overcome this drawback, we propose a novel method named Visual Prompt Tuning (VPT). To the best of our knowledge, this is the first attempt to deploy a VL-PTM for the few-shot text classification task. The main idea is to generate image embeddings w.r.t. category names as visual prompts and then add them to the aligning process. Extensive experiments show that our VPT achieves significant improvements under both zero-shot and few-shot settings. Notably, our VPT even outperforms the most recent prompt-tuning methods on five public text classification datasets.
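The aligning process described in the abstract can be sketched as follows. This is a minimal toy illustration with NumPy, not the authors' implementation: the convex-combination fusion (weight `alpha`), the function names `l2_normalize` and `classify`, and the hand-made embeddings are all assumptions made for illustration; in the paper the embeddings would come from a VL-PTM's text and image encoders.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length along `axis`."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def classify(sample_emb, cat_text_embs, cat_visual_prompts, alpha=0.5):
    """Pick the category whose fused (text + visual-prompt) embedding is
    most similar to the sample's text embedding.

    `alpha` weights the category-name text embedding against its visual
    prompt; this simple fusion is an illustrative assumption, not the
    paper's exact formulation.
    """
    fused = l2_normalize(alpha * l2_normalize(cat_text_embs)
                         + (1.0 - alpha) * l2_normalize(cat_visual_prompts))
    sims = l2_normalize(sample_emb) @ fused.T  # cosine similarities
    return int(np.argmax(sims))

# Toy example: 3 categories in a 4-dimensional embedding space.
cat_text = np.eye(3, 4)                    # stand-in text-encoder outputs
cat_visual = cat_text + 0.1                # stand-in per-category image embeddings
sample = np.array([0.1, 0.9, 0.1, 0.0])    # sample closest to category 1
print(classify(sample, cat_text, cat_visual))  # → 1
```

Without the visual prompts (`alpha=1.0`), this reduces to the text-only aligning baseline the abstract criticizes; the fused version is what lets the image encoder's knowledge influence the prediction.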
Anthology ID:
2022.coling-1.492
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5560–5570
URL:
https://aclanthology.org/2022.coling-1.492
Cite (ACL):
Jingyuan Wen, Yutian Luo, Nanyi Fei, Guoxing Yang, Zhiwu Lu, Hao Jiang, Jie Jiang, and Zhao Cao. 2022. Visual Prompt Tuning for Few-Shot Text Classification. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5560–5570, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Visual Prompt Tuning for Few-Shot Text Classification (Wen et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.492.pdf