Brian Wang
2023
Improving Syntactic Probing Correctness and Robustness with Control Tasks
Weicheng Ma | Brian Wang | Hefan Zhang | Lili Wang | Rolando Coto-Solano | Saeed Hassanpour | Soroush Vosoughi
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic features. However, these methods are usually biased by the PLMs' memorization of common word co-occurrences, even when those co-occurrences do not form syntactic relations. This paper presents random-word-substitution and random-label-matching control tasks that reduce these biases and improve the robustness of syntactic probing methods. Our control tasks also notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. With the control tasks, syntactic probing methods reconstruct syntactic features more accurately and generalize better to unseen text domains. Our experiments show that the proposed control tasks are effective across different PLMs, probing methods, and syntactic features.
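The random-word-substitution idea can be illustrated with a minimal sketch: take a sentence with an annotated syntactic relation and replace the related words with random vocabulary items, so that a probe relying only on memorized word co-occurrences should fail on the control instances. The function name, data layout, and vocabulary below are illustrative assumptions, not the paper's implementation.

```python
import random

# Minimal sketch of a random-word-substitution control instance.
# The (tokens, head_idx, dep_idx) layout and the toy vocabulary are
# illustrative assumptions, not the paper's actual data format.

def make_control_instance(tokens, head_idx, dep_idx, vocab, rng=random):
    """Replace the two words in a syntactic relation with random vocabulary
    words, keeping the tree positions (and hence the probing label) fixed."""
    control = list(tokens)
    control[head_idx] = rng.choice(vocab)
    control[dep_idx] = rng.choice(vocab)
    return control

if __name__ == "__main__":
    vocab = ["table", "run", "blue", "quickly", "idea", "river"]
    tokens = ["the", "dog", "chased", "the", "cat"]
    # "chased" (index 2) heads "dog" (index 1) in an nsubj relation.
    print(make_control_instance(tokens, head_idx=2, dep_idx=1, vocab=vocab))
```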
Deciphering Stereotypes in Pre-Trained Language Models
Weicheng Ma | Henry Scheible | Brian Wang | Goutham Veeramachaneni | Pratim Chowdhary | Alan Sun | Andrew Koulogeorge | Lili Wang | Diyi Yang | Soroush Vosoughi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Warning: This paper contains content that is stereotypical and may be upsetting. This paper addresses the issue of demographic stereotypes present in Transformer-based pre-trained language models (PLMs) and aims to deepen our understanding of how these biases are encoded in these models. To accomplish this, we introduce an easy-to-use framework for examining the stereotype-encoding behavior of PLMs through a combination of model probing and textual analyses. Our findings reveal that a small subset of attention heads within PLMs is primarily responsible for encoding stereotypes, and that stereotypes toward specific minority groups can be identified using attention maps on these attention heads. Leveraging these insights, we propose attention-head pruning as a viable approach for debiasing PLMs without compromising their language modeling capabilities or adversely affecting their performance on downstream tasks.
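As a rough illustration of attention-head pruning in a Transformer encoder, the sketch below uses the Hugging Face `prune_heads` API to remove a chosen set of heads from a BERT model. The layer and head indices are placeholders, not the stereotype-encoding heads identified in the paper.

```python
# Illustrative sketch of attention-head pruning with Hugging Face Transformers.
# The layer/head indices below are placeholders, not the heads found in the paper.
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Map of layer index -> list of head indices to remove from that layer.
heads_to_prune = {0: [2, 5], 7: [11]}
model.prune_heads(heads_to_prune)

# The pruned model is used as before; the remaining heads are re-indexed.
inputs = tokenizer("Nurses are caring.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```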