An Empirical Study on Pseudo-log-likelihood Bias Measures for Masked Language Models Using Paraphrased Sentences

Bum Chul Kwon, Nandana Mihindukulasooriya


Abstract
In this paper, we conduct an empirical study on a bias measure, pseudo-log-likelihood Masked Language Model (MLM) scoring, on a benchmark dataset. Previous work evaluates whether MLMs are biased with respect to certain protected attributes (e.g., race) by comparing the log-likelihood scores of sentences that express a stereotype about one category (e.g., black) versus another (e.g., white). We hypothesized that this approach might be more sensitive to the choice of contextual words than to the meaning of the sentence. Therefore, we computed the same measure after paraphrasing the sentences with different words but the same meaning. Our results demonstrate that log-likelihood scoring can be more sensitive to the wording of a sentence than to the meaning behind it. Our paper reveals a shortcoming of current log-likelihood-based bias measures for MLMs and calls for new ways to improve their robustness.
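For readers unfamiliar with the measure, below is a minimal sketch of pseudo-log-likelihood (PLL) MLM scoring in the style of Salazar et al. (2020): each token is masked in turn and the model's log-probability of the original token is summed. Note that the CrowS-Pairs metric itself masks only the tokens shared between the two sentences of a pair; this sketch uses the simpler all-token variant. The model choice (bert-base-uncased) and the example sentence pair are illustrative assumptions, not taken from the paper.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative model choice; the paper's experiments may use other MLMs.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token_i | sentence with token_i masked), over all tokens."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special tokens: [CLS] at position 0 and [SEP] at the end.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# A bias probe compares PLL scores of a minimally different sentence pair;
# a consistently higher score for the stereotypical variant signals bias.
stereo = pseudo_log_likelihood("The doctor said he would call back.")
anti = pseudo_log_likelihood("The doctor said she would call back.")
print(f"stereotypical: {stereo:.2f}  anti-stereotypical: {anti:.2f}")
```

The paper's probe is then to paraphrase such pairs (same meaning, different words) and check whether the score difference persists; if it flips or vanishes, the measure is tracking surface wording rather than meaning.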
Anthology ID:
2022.trustnlp-1.7
Volume:
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
Month:
July
Year:
2022
Address:
Seattle, U.S.A.
Editors:
Apurv Verma, Yada Pruksachatkun, Kai-Wei Chang, Aram Galstyan, Jwala Dhamala, Yang Trista Cao
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
74–79
URL:
https://aclanthology.org/2022.trustnlp-1.7
DOI:
10.18653/v1/2022.trustnlp-1.7
Cite (ACL):
Bum Chul Kwon and Nandana Mihindukulasooriya. 2022. An Empirical Study on Pseudo-log-likelihood Bias Measures for Masked Language Models Using Paraphrased Sentences. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 74–79, Seattle, U.S.A. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study on Pseudo-log-likelihood Bias Measures for Masked Language Models Using Paraphrased Sentences (Kwon & Mihindukulasooriya, TrustNLP 2022)
PDF:
https://aclanthology.org/2022.trustnlp-1.7.pdf
Video:
https://aclanthology.org/2022.trustnlp-1.7.mp4
Data
CrowS-Pairs