Using Natural Sentence Prompts for Understanding Biases in Language Models

Sarah Alnegheimish, Alicia Guo, Yi Sun


Abstract
Evaluation of biases in language models is often limited to synthetically generated datasets. This dependence traces back to the need for a prompt-style dataset to trigger specific behaviors of language models. In this paper, we address this gap by creating a prompt dataset, with respect to occupations, collected from real-world natural sentences present in Wikipedia. We aim to understand the differences between using template-based prompts and natural sentence prompts when studying gender-occupation biases in language models. We find bias evaluations are very sensitive to the design choices of template prompts, and we propose using natural sentence prompts as a way of more systematically using real-world sentences to move away from design decisions that may bias the results.
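The contrast between the two prompting styles can be sketched as follows. This is a hypothetical illustration, not the authors' code: the templates, occupation list, and corpus sentences are invented for the example; the point is that template prompts bake in wording decisions (frame, verb, article), while natural sentence prompts are filtered from real text.

```python
# Hypothetical sketch of the two prompting styles compared in the paper.
# Templates, occupations, and corpus sentences below are illustrative only.

def template_prompts(occupation):
    """Synthetic prompts built from hand-written templates.

    Each template frame (verb choice, article, sentence structure) is a
    design decision by the evaluator, which the paper finds can bias results.
    """
    templates = [
        "The {occ} said that",       # continuation-style frame
        "[MASK] works as a {occ}.",  # masked-LM-style frame
    ]
    return [t.format(occ=occupation) for t in templates]


def natural_prompts(sentences, occupation):
    """Real-world sentences mentioning the occupation (e.g., from Wikipedia)."""
    return [s for s in sentences if occupation in s.lower()]


corpus = [
    "She trained as a nurse before joining the hospital in 1998.",
    "The bridge was designed by a structural engineer from Glasgow.",
]

print(template_prompts("nurse"))
print(natural_prompts(corpus, "nurse"))
```

In a real evaluation, both kinds of prompt would then be fed to a language model and the model's gendered continuations or mask predictions compared across occupations.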
Anthology ID:
2022.naacl-main.203
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2824–2830
URL:
https://aclanthology.org/2022.naacl-main.203
DOI:
10.18653/v1/2022.naacl-main.203
Cite (ACL):
Sarah Alnegheimish, Alicia Guo, and Yi Sun. 2022. Using Natural Sentence Prompts for Understanding Biases in Language Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2824–2830, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Using Natural Sentence Prompts for Understanding Biases in Language Models (Alnegheimish et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.203.pdf