Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction
Yuhang Wang | Dongyuan Lu | Chao Kong | Jitao Sang
Findings of the Association for Computational Linguistics: ACL 2023
Many works have employed prompt tuning methods to automatically optimize prompt queries and extract the factual knowledge stored in Pre-trained Language Models. In this paper, we observe that the optimized prompts, including both discrete prompts and continuous prompts, exhibit undesirable object bias. To address this problem, we propose a novel prompt tuning method called MeCoD, consisting of three modules: Prompt Encoder, Object Equalization, and Biased Object Obstruction. Experimental results show that MeCoD can significantly reduce object bias and at the same time improve the accuracy of factual knowledge extraction.
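To make the setting concrete, the sketch below illustrates generic continuous prompt tuning for fill-in-the-blank factual probing (the setting the abstract refers to), not the paper's MeCoD method. It assumes the Hugging Face `transformers` library and a BERT-style masked LM; the model name, prompt length, and toy fact are illustrative assumptions.

```python
# Generic continuous-prompt factual probing sketch (not MeCoD):
# learn soft prompt embeddings so a frozen masked LM predicts the
# object of a fact at the [MASK] position.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-cased"          # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
mlm = AutoModelForMaskedLM.from_pretrained(model_name)
mlm.eval()                              # the PLM stays frozen
for p in mlm.parameters():
    p.requires_grad = False

n_prompt_tokens = 5                     # soft prompt length (assumption)
hidden = mlm.config.hidden_size
prompt_embeds = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)
optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)

def loss_for_fact(subject: str, obj: str) -> torch.Tensor:
    """Cross-entropy of the PLM predicting `obj` at [MASK], with the
    learned continuous prompt inserted between subject and mask."""
    subj_ids = tokenizer(subject, add_special_tokens=False,
                         return_tensors="pt").input_ids
    cls_id = torch.tensor([[tokenizer.cls_token_id]])
    mask_id = torch.tensor([[tokenizer.mask_token_id]])
    sep_id = torch.tensor([[tokenizer.sep_token_id]])

    word_emb = mlm.get_input_embeddings()
    inputs_embeds = torch.cat([
        word_emb(cls_id),
        word_emb(subj_ids),
        prompt_embeds.unsqueeze(0),     # soft "relation template"
        word_emb(mask_id),
        word_emb(sep_id),
    ], dim=1)

    logits = mlm(inputs_embeds=inputs_embeds).logits
    mask_pos = 1 + subj_ids.size(1) + n_prompt_tokens
    obj_id = tokenizer(obj, add_special_tokens=False).input_ids[0]  # first subword
    return nn.functional.cross_entropy(
        logits[0, mask_pos].unsqueeze(0), torch.tensor([obj_id]))

# One toy optimization step on a single (subject, object) pair.
loss = loss_for_fact("Paris", "France")
loss.backward()
optimizer.step()
```

The object bias the abstract describes arises in exactly this setup: the optimized prompt can push the frozen LM toward a small set of frequent objects regardless of the subject, which MeCoD's three modules are designed to counteract.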