A. Stevie Bergman
2024
STAR: SocioTechnical Approach to Red Teaming Language Models
Laura Weidinger | John F J Mellor | Bernat Guillén Pegueroles | Nahema Marchal | Ravin Kumar | Kristian Lum | Canfer Akbulut | Mark Diaz | A. Stevie Bergman | Mikel D. Rodriguez | Verena Rieser | William Isaac
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming the safety of large language models. STAR makes two key contributions. First, it enhances steerability by generating parameterised instructions for human red teamers, leading to improved coverage of the risk surface; parameterised instructions also provide more detailed insights into model failures at no increased cost. Second, STAR improves signal quality by matching demographics to assess harms for specific groups, resulting in more sensitive annotations. STAR further employs a novel arbitration step to leverage diverse viewpoints and improve label reliability, treating disagreement not as noise but as a valuable contribution to signal quality.
2022
Guiding the Release of Safer E2E Conversational AI through Value Sensitive Design
A. Stevie Bergman | Gavin Abercrombie | Shannon Spruit | Dirk Hovy | Emily Dinan | Y-Lan Boureau | Verena Rieser
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Over the last several years, end-to-end neural conversational agents have vastly improved their ability to carry on unrestricted, open-domain conversations with humans. However, these models are often trained on large datasets from the Internet and, as a result, may learn undesirable behaviours from this data, such as toxic or otherwise harmful language. Thus, researchers must wrestle with how and when to release these models. In this paper, we survey recent and related work to highlight tensions between values, potential positive impact, and potential harms. We also provide a framework to support practitioners in deciding whether and how to release these models, following the tenets of value-sensitive design.