Nicholas Blumm


2023

Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting
Preethi Lahoti | Nicholas Blumm | Xiao Ma | Raghavendra Kotikalapudi | Sahitya Potluri | Qijun Tan | Hansa Srinivasan | Ben Packer | Ahmad Beirami | Alex Beutel | Jilin Chen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

A crucial challenge for generative large language models (LLMs) is diversity: when a user’s prompt is under-specified, models may follow implicit assumptions while generating a response, which may result in homogenization of the responses, as well as certain demographic groups being under-represented or even erased from the generated responses. In this paper, we formalize the problem of diversity of representation in LLM generations. We present evaluation datasets and propose metrics to measure diversity in generated responses along people and culture axes. We find that LLMs understand the notion of diversity, and that they can reason about and critique their own responses toward that goal. This finding motivates a new prompting technique, collective-critique and self-voting (CCSV), which self-improves the people diversity of LLMs by tapping into the model's own diversity-reasoning capabilities, without relying on handcrafted examples or prompt tuning. Extensive empirical experiments with both human and automated evaluations show that our proposed approach is effective at improving people and culture diversity, and outperforms all baseline methods by a large margin.
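The abstract only sketches CCSV at a high level, so the following is a minimal, hypothetical Python sketch of what a collective-critique-and-self-voting prompting loop could look like. The helper `call_llm`, the prompt wording, the number of drafts, and the number of critique rounds are all illustrative assumptions, not the authors' published procedure.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion API request)."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")


def ccsv_respond(user_prompt: str, n_drafts: int = 3, n_rounds: int = 2) -> str:
    # Step 1: sample several candidate responses to the (possibly
    # under-specified) user prompt.
    drafts = [call_llm(user_prompt) for _ in range(n_drafts)]

    for _ in range(n_rounds):
        # Step 2 (collective critique): ask the model to critique the whole
        # set of drafts at once, focusing on demographic and cultural coverage.
        critique = call_llm(
            f"Here are candidate answers to the prompt {user_prompt!r}:\n"
            + "\n---\n".join(drafts)
            + "\n\nCritique these answers collectively: which demographic or "
              "cultural groups are missing or under-represented?"
        )

        # Step 3: regenerate the drafts, conditioning on the collective critique.
        drafts = [
            call_llm(
                f"{user_prompt}\n\nAddress this critique in your answer:\n{critique}"
            )
            for _ in range(n_drafts)
        ]

    # Step 4 (self-voting): ask the model to pick the draft it judges most
    # representative of diverse groups.
    vote = call_llm(
        "Which of the following answers best represents diverse demographic "
        "groups? Reply with the number only.\n"
        + "\n".join(f"{i}: {d}" for i, d in enumerate(drafts))
    )
    try:
        return drafts[int(vote.strip())]
    except (ValueError, IndexError):
        return drafts[0]  # fall back to the first draft if the vote is malformed
```

Note that this loop requires no handcrafted few-shot examples or prompt tuning; it relies only on the model's ability to critique and rank its own outputs, which is the property the paper reports LLMs already possess.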