Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View

Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun


Abstract
Prompt-based probing has been widely used to evaluate the abilities of pretrained language models (PLMs). Unfortunately, recent studies have found that such evaluations can be inaccurate, inconsistent, and unreliable. Furthermore, the lack of understanding of its inner workings, combined with its wide applicability, risks leading to unforeseen consequences when evaluating and applying PLMs in real-world applications. To discover, understand, and quantify these risks, this paper investigates prompt-based probing from a causal view, highlights three critical biases that can induce biased results and conclusions, and proposes debiasing via causal intervention. This paper provides valuable insights for the design of unbiased datasets, better probing frameworks, and more reliable evaluations of pretrained language models. Furthermore, our conclusions underscore the need to rethink the criteria for identifying better pretrained language models.
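For context, prompt-based probing typically queries a masked language model with a cloze-style template and reads its top predictions off as factual knowledge. Below is a minimal sketch of this setup, assuming the Hugging Face transformers library and a LAMA-style prompt; it is an illustration of the probing paradigm being analyzed, not the paper's released c-box/causaleval code.

    # Minimal sketch of prompt-based (cloze-style) factual probing.
    # Assumes the Hugging Face `transformers` library; the prompt follows
    # the LAMA convention. Illustrative only, not the paper's code.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-cased")

    # Query the PLM with a cloze prompt; the top-ranked token is read off
    # as the model's "knowledge" of the fact (Dante, born-in, Florence).
    for candidate in fill_mask("Dante was born in [MASK]."):
        print(candidate["token_str"], round(candidate["score"], 4))

The paper's causal analysis targets exactly this pipeline: the choice of prompt template, the distribution of probed facts, and the model's surface-form preferences can each bias the scores such a probe produces.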
Anthology ID:
2022.acl-long.398
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5796–5808
URL:
https://aclanthology.org/2022.acl-long.398
DOI:
10.18653/v1/2022.acl-long.398
Cite (ACL):
Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, and Le Sun. 2022. Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5796–5808, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View (Cao et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.398.pdf
Software:
2022.acl-long.398.software.zip
Code:
c-box/causaleval
Data:
BioLAMA, LAMA, WebText