CAVA: A Tool for Cultural Alignment Visualization & Analysis
Nevan Giuliani | Cheng Charles Ma | Prakruthi Pradeep | Daphne Ippolito
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
It is well known that language models are biased; they have patchy knowledge of countries and cultures that are poorly represented in their training data. We introduce CAVA, a visualization tool for identifying and analyzing country-specific biases in language models. Our tool allows users to identify whether a language model successfully captures the perspectives of people of different nationalities. The tool supports analysis of both long-form and multiple-choice model responses, as well as comparisons between models. Our open-source code allows users to easily upload any country-based language model generations they wish to analyze. To showcase CAVA’s efficacy, we present a case study analyzing how several popular language models answer survey questions from the World Values Survey.