ReadPrompt: A Readable Prompting Method for Reliable Knowledge Probing

Zezhong Wang, Luyao Ye, Hongru Wang, Wai-Chung Kwan, David Ho, Kam-Fai Wong


Abstract
Knowledge probing is a task that assesses the knowledge encoded in pre-trained language models (PLMs) by having the PLM complete prompts such as “Italy is located in __.” The model’s prediction precision serves as a lower bound on the amount of knowledge it contains. Subsequent works explore training a series of vectors as prompts to guide PLMs toward more accurate predictions. However, these methods compromise the readability of the prompts: because the prompts cannot be understood from their literal meaning, it is difficult to verify whether they are correct, which diminishes the credibility of the probing results derived from them. To address this issue, we propose ReadPrompt, a novel method that identifies meaningful sentences to serve as prompts. Experiments show that ReadPrompt achieves state-of-the-art performance on the current knowledge probing benchmark. Moreover, because the prompts are readable, we discovered a misalignment between the constructed prompts and the knowledge being probed; an attack experiment verifies that this misalignment is also present in current prompting methods. We therefore claim that the probing outcomes of current prompting methods are unreliable and overestimate the knowledge contained within PLMs.
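
To make the probing setup concrete, the sketch below illustrates cloze-style knowledge probing as described in the abstract; it is not the paper’s code. A PLM fills the blank in a manually written, readable prompt, and precision against the gold answer (“Europe” for the example above) is what probing measures. The sketch assumes bert-base-uncased as the probed PLM and HuggingFace’s fill-mask pipeline.

    # A minimal sketch of cloze-style knowledge probing; not the paper's code.
    # Assumes bert-base-uncased as the probed PLM and the HuggingFace fill-mask pipeline.
    from transformers import pipeline

    probe = pipeline("fill-mask", model="bert-base-uncased")

    # A manually written, readable prompt for the relation "located in";
    # the paper's concern is that trained soft prompts lose this readability.
    predictions = probe("Italy is located in [MASK].")
    for p in predictions:  # top-5 candidates by default
        print(f"{p['token_str']}\t{p['score']:.3f}")
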
Anthology ID: 2023.findings-emnlp.501
Original: 2023.findings-emnlp.501v1
Version 2: 2023.findings-emnlp.501v2
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 7468–7479
URL: https://aclanthology.org/2023.findings-emnlp.501
DOI: 10.18653/v1/2023.findings-emnlp.501
Cite (ACL):
Zezhong Wang, Luyao Ye, Hongru Wang, Wai-Chung Kwan, David Ho, and Kam-Fai Wong. 2023. ReadPrompt: A Readable Prompting Method for Reliable Knowledge Probing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7468–7479, Singapore. Association for Computational Linguistics.
Cite (Informal):
ReadPrompt: A Readable Prompting Method for Reliable Knowledge Probing (Wang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.501.pdf