Fine-tuning after Prompting: an Explainable Way for Classification

Zezhong Wang, Luyao Ye, Hongru Wang, Boyang Xue, Yiming Du, Bin Liang, Kam-Fai Wong


Abstract
Prompting is an alternative approach to utilizing pre-trained language models (PLMs) for classification tasks. In contrast to fine-tuning, prompting is more understandable to humans because it uses natural language to interact with the PLM, but it often falls short in accuracy. While current research primarily focuses on boosting the performance of prompting methods to compete with fine-tuning, we believe the two approaches are not mutually exclusive; each has its strengths and weaknesses. In this study, we depart from the competitive view of prompting versus fine-tuning and instead combine them, introducing a novel method called F&P. This approach lets us harness the accuracy of Fine-tuning and the explainability of Prompting simultaneously. Specifically, we reformulate each sample into a prompt and then fine-tune a linear classifier on top of the PLM. We then extract verbalizers from the weights of this classifier. During inference, we reformulate the sample in the same way and query the PLM. The PLM generates a word, which is mapped to a prediction by a dictionary lookup in the verbalizer. Experiments show that keeping only 30 keywords per class achieves performance comparable to fine-tuning. Moreover, both the prompt and the verbalizers are expressed in natural language, making them fully understandable to humans. The F&P method thus offers an effective and transparent way to employ a PLM for classification tasks.
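The verbalizer-extraction and lookup steps described above can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the vocabulary, class names, and classifier weights are invented, and the fine-tuned PLM is stood in for by a fixed weight matrix over vocabulary logits.

```python
# Toy sketch of F&P's verbalizer stage: after fine-tuning, the linear
# classifier's weights over the vocabulary are used to pick the top-k
# keywords per class; inference then reduces to a dictionary lookup on
# the word the PLM generates at the masked position.
import numpy as np

# Invented vocabulary and classes for illustration only.
vocab = ["great", "awful", "good", "bad", "fine", "terrible"]
classes = ["positive", "negative"]

# Stand-in for fine-tuned classifier weights, shape (num_classes, vocab_size).
W = np.array([
    [0.9, -0.8, 0.7, -0.6, 0.4, -0.9],   # positive class
    [-0.9, 0.8, -0.7, 0.6, -0.4, 0.9],   # negative class
])

def extract_verbalizer(W, vocab, classes, k=3):
    """Keep the k highest-weighted vocabulary words per class."""
    verbalizer = {}
    for label, row in zip(classes, W):
        top = np.argsort(row)[::-1][:k]
        for i in top:
            verbalizer[vocab[i]] = label
    return verbalizer

def predict(generated_word, verbalizer):
    """Map the word the PLM produces at [MASK] to a class label."""
    return verbalizer.get(generated_word)

verbalizer = extract_verbalizer(W, vocab, classes, k=3)
print(predict("great", verbalizer))     # -> positive
print(predict("terrible", verbalizer))  # -> negative
```

Because the verbalizer is just a word-to-label dictionary, a human can inspect exactly which keywords drive each class decision, which is the transparency the abstract emphasizes.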
Anthology ID:
2024.sighan-1.16
Volume:
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Kam-Fai Wong, Min Zhang, Ruifeng Xu, Jing Li, Zhongyu Wei, Lin Gui, Bin Liang, Runcong Zhao
Venues:
SIGHAN | WS
Publisher:
Association for Computational Linguistics
Pages:
133–142
URL:
https://aclanthology.org/2024.sighan-1.16
Cite (ACL):
Zezhong Wang, Luyao Ye, Hongru Wang, Boyang Xue, Yiming Du, Bin Liang, and Kam-Fai Wong. 2024. Fine-tuning after Prompting: an Explainable Way for Classification. In Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10), pages 133–142, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Fine-tuning after Prompting: an Explainable Way for Classification (Wang et al., SIGHAN-WS 2024)
PDF:
https://aclanthology.org/2024.sighan-1.16.pdf