Generative Prompt Tuning for Relation Classification

Jiale Han, Shuai Zhao, Bo Cheng, Shengkun Ma, Wei Lu


Abstract
Using prompts to elicit the knowledge contained in pre-trained language models for downstream tasks has become an active research topic. Current prompt tuning methods mostly convert downstream tasks into masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations of fixed length, which has proven effective for tasks with simple label spaces. However, when applied to relation classification, which exhibits a complex label space, vanilla prompt tuning methods may struggle with label verbalizations of arbitrary length due to rigid prompt restrictions. Inspired by the text infilling task used to pre-train generative models, which can flexibly predict missing spans, we propose a novel generative prompt tuning method that reformulates relation classification as an infilling problem. This frees our approach from the limitations of current prompt-based approaches and thus fully exploits the rich semantics of entity and relation types. In addition, we design entity-guided decoding and discriminative relation scoring to generate and align relations effectively and efficiently during inference. Extensive experiments in fully supervised and low-resource settings demonstrate the effectiveness of our approach.
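To make the abstract's idea concrete, the sketch below casts relation classification as infilling: the input is wrapped in a cloze-style template, and candidate relation verbalizations of different lengths are compared by a length-normalized score. This is an illustrative assumption about the setup, not the paper's exact template or scoring function; the names `build_prompt`, `score_relation`, and `classify` are hypothetical, and the token log-probabilities here are supplied by hand where the paper would obtain them from a generative language model.

```python
def build_prompt(sentence, head, tail):
    """Wrap the input in a cloze-style template; the blank can be
    filled by a relation verbalization of arbitrary length.
    (Illustrative template, not the paper's exact format.)"""
    return f"{sentence} The relation between {head} and {tail} is <mask>."


def score_relation(token_logprobs):
    """Length-normalized log-probability, so short verbalizations
    (e.g. 'founder') and long ones (e.g. 'place of birth') compare fairly."""
    return sum(token_logprobs) / len(token_logprobs)


def classify(relation_logprobs):
    """Discriminative scoring over candidates: pick the relation whose
    verbalization has the best normalized score. `relation_logprobs`
    maps relation name -> per-token log-probs of its verbalization;
    in practice these would come from a generative LM's decoder."""
    return max(relation_logprobs, key=lambda r: score_relation(relation_logprobs[r]))


# Toy example with made-up log-probabilities for two candidate relations.
candidates = {
    "founder": [-0.2],
    "place of birth": [-0.5, -0.6, -0.7],
}
prompt = build_prompt("Steve Jobs started Apple.", "Steve Jobs", "Apple")
prediction = classify(candidates)
```

Normalizing by verbalization length is one simple way to keep multi-token labels from being penalized merely for being longer, which is the core difficulty the abstract attributes to fixed-length verbalizers.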
Anthology ID:
2022.findings-emnlp.231
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3170–3185
URL:
https://aclanthology.org/2022.findings-emnlp.231
DOI:
10.18653/v1/2022.findings-emnlp.231
Cite (ACL):
Jiale Han, Shuai Zhao, Bo Cheng, Shengkun Ma, and Wei Lu. 2022. Generative Prompt Tuning for Relation Classification. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3170–3185, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Generative Prompt Tuning for Relation Classification (Han et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.231.pdf