Towards Low-Resource Alignment to Diverse Perspectives with Sparse Feedback

Chu Fei Luo, Samuel Dahan, Xiaodan Zhu


Abstract
As language models have a greater impact on society, it is important to ensure they are aligned to a diverse range of perspectives and can reflect nuance in human values. However, the most popular training paradigms for modern language models often assume there is one optimal answer for every query, leading to generic responses and poor alignment. In this work, we aim to enhance pluralistic alignment of language models in a low-resource setting with two methods: pluralistic decoding and model steering. We empirically demonstrate that model steering offers consistent improvement over zero-shot and few-shot baselines with only 50 annotated samples. Our proposed methods decrease false positives in several high-stakes tasks such as hate speech detection and misinformation detection, and improve the distributional alignment to human values from different demographics. We hope our work highlights the importance of diversity and how language models can be adapted to consider nuanced perspectives.
Anthology ID:
2025.findings-emnlp.1106
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
20330–20339
URL:
https://aclanthology.org/2025.findings-emnlp.1106/
Cite (ACL):
Chu Fei Luo, Samuel Dahan, and Xiaodan Zhu. 2025. Towards Low-Resource Alignment to Diverse Perspectives with Sparse Feedback. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 20330–20339, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Towards Low-Resource Alignment to Diverse Perspectives with Sparse Feedback (Luo et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1106.pdf
Checklist:
 2025.findings-emnlp.1106.checklist.pdf