Eleonora Presani
2023
ROBBIE: Robust Bias Evaluation of Large Generative Language Models
David Esiobu | Xiaoqing Tan | Saghar Hosseini | Megan Ung | Yuchen Zhang | Jude Fernandes | Jane Dwivedi-Yu | Eleonora Presani | Adina Williams | Eric Smith
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
As generative large language models (LLMs) grow more performant and prevalent, we must develop sufficiently comprehensive tools to measure and improve their fairness. Different prompt-based datasets can be used to measure social bias across multiple text domains and demographic axes, meaning that testing LLMs on more datasets can potentially help us characterize their biases more fully, and better ensure equal and equitable treatment of marginalized demographic groups. In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs. Two of these metrics, AdvPromptSet and HolisticBiasR, are novel datasets proposed in the paper. The comparison of these benchmarks gives us insights into the bias and toxicity of the compared models. We also explore the frequency of demographic terms in common LLM pre-training corpora and how this may relate to model biases. (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity mitigation techniques perform across our suite of measurements. ROBBIE aims to provide insights for practitioners when deploying a model, emphasizing the need to not only measure potential harms, but also to understand how they arise by characterizing the data, to mitigate harms once found, and to balance any trade-offs. We open-source our analysis code in the hope of encouraging broader measurements of bias in future LLMs.
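As a rough illustration of the corpus analysis mentioned in the abstract (exploring how often demographic terms appear in pre-training data), here is a minimal Python sketch. It is an assumption-based example, not the paper's released analysis code; the term lists and the tiny corpus are placeholders.

import re
from collections import Counter

# Placeholder demographic terms grouped by axis; the study covers 12 axes.
demographic_terms = {
    "gender": ["woman", "man", "nonbinary"],
    "religion": ["muslim", "christian", "jewish"],
}

def term_frequencies(documents, terms_by_axis):
    """Return a Counter mapping (axis, term) to its occurrence count across documents."""
    counts = Counter()
    patterns = {
        (axis, term): re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
        for axis, terms in terms_by_axis.items()
        for term in terms
    }
    for doc in documents:
        for key, pattern in patterns.items():
            counts[key] += len(pattern.findall(doc))
    return counts

corpus = ["A woman and a man walked in.", "The nonbinary author spoke."]
print(term_frequencies(corpus, demographic_terms))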
2022
“I’m sorry to hear that”: Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith | Melissa Hall | Melanie Kambadur | Eleonora Presani | Adina Williams
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
As language models grow in popularity, it becomes increasingly important to clearly measure all possible markers of demographic identity in order to avoid perpetuating existing societal harms. Many datasets for measuring bias currently exist, but they are restricted in their coverage of demographic axes and are commonly used with preset bias tests that presuppose which types of biases models can exhibit. In this work, we present a new, more inclusive bias measurement dataset, HolisticBias, which includes nearly 600 descriptor terms across 13 different demographic axes. HolisticBias was assembled in a participatory process including experts and community members with lived experience of these terms. These descriptors combine with a set of bias measurement templates to produce over 450,000 unique sentence prompts, which we use to explore, identify, and reduce novel forms of bias in several generative models. We demonstrate that HolisticBias is effective at measuring previously undetectable biases in token likelihoods from language models, as well as in an offensiveness classifier. We will invite additions and amendments to the dataset, which we hope will serve as a basis for easier-to-use and more standardized methods for evaluating bias in NLP models.
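To show how the descriptor-by-template construction described in the abstract can yield hundreds of thousands of prompts, here is a minimal Python sketch. The axes, descriptor terms, and templates below are illustrative placeholders, not the released HolisticBias data.

descriptors_by_axis = {
    "ability": ["deaf", "hard-of-hearing"],
    "age": ["middle-aged", "older"],
}  # the real dataset has nearly 600 descriptors across 13 axes

templates = [
    "I am a {descriptor} person.",
    "I have a friend who is a {descriptor} person.",
]  # placeholder templates written in the style the abstract describes

def generate_prompts(descriptors_by_axis, templates):
    """Yield (axis, descriptor, prompt) for every descriptor/template combination."""
    for axis, terms in descriptors_by_axis.items():
        for term in terms:
            for template in templates:
                yield axis, term, template.format(descriptor=term)

for axis, term, prompt in generate_prompts(descriptors_by_axis, templates):
    print(f"[{axis}] {prompt}")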