The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm
Aakanksha | Arash Ahmadian | Beyza Ermis | Seraphina Goldfarb-Tarrant | Julia Kreutzer | Marzieh Fadaee | Sara Hooker
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
A key concern with the concept of *“alignment”* is the implicit question of *“alignment to what?”*. AI systems are increasingly used across the world, yet safety alignment is often focused on homogeneous monolingual settings. Additionally, preference training and safety measures often overfit to harms common in Western-centric datasets. Here, we explore the viability of different alignment approaches when balancing dual objectives: addressing and optimizing for a non-homogeneous set of languages and cultural preferences while minimizing both global and local harms. We collect the first human-annotated red-teaming prompts in different languages, distinguishing between global and local harm, which serve as a laboratory to understand the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages. While this setting is seldom covered by the literature to date, which primarily centers on English harm mitigation, it captures real-world interactions with AI systems around the world. We establish a new precedent for state-of-the-art alignment techniques across 6 languages with minimal degradation in general performance. Our work provides important insights into cross-lingual transfer and novel optimization approaches to safeguard AI systems designed to serve global populations.