The discordance between embedded ethics and cultural inference in large language models

Aida Ramezani, Yang Xu


Abstract
Effective interactions between artificial intelligence (AI) and humans require an equitable and accurate representation of diverse cultures. It is known that current AI, particularly large language models (LLMs), possesses some degree of cultural knowledge, but not without limitations. We present a framework aimed at understanding the origin of these limitations. We hypothesize that there is a fundamental discordance between embedded ethics (how LLMs represent right versus wrong) and cultural inference (how LLMs infer cultural knowledge, specifically cultural norms). We demonstrate this by extracting low-dimensional subspaces that embed the ethical principles of LLMs, based on established benchmarks. We then show that the errors LLMs make in culturally distinctive scenarios correlate significantly with how they represent cultural norms relative to these embedded ethics subspaces. Furthermore, we show that coercing cultural norms to be more aligned with the embedded ethics improves LLM performance in cultural inference. Our analyses of 12 language models, two large-scale cultural benchmarks spanning 75 countries, and two ethical datasets indicate that 1) the ethics-culture discordance tends to be exacerbated in instruct-tuned models, and 2) how current LLMs represent ethics can impose limitations on their adaptation to diverse cultures, particularly pertaining to non-Western and low-income regions.
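
The abstract describes extracting low-dimensional ethics subspaces from LLM representations and measuring how cultural-norm representations sit relative to them. The sketch below is a minimal illustration of that idea, not the authors' actual procedure: it assumes an SVD/PCA-style subspace extraction over embeddings of ethical judgments and a projection-based alignment score, and it uses random stand-in embeddings, hypothetical dimensions, and invented function names purely for exposition.

import numpy as np
from numpy.linalg import svd

rng = np.random.default_rng(0)
d = 768                                    # hypothetical embedding dimension
ethics_emb = rng.normal(size=(200, d))     # stand-in embeddings of right/wrong statements
norm_emb = rng.normal(size=(50, d))        # stand-in embeddings of cultural-norm scenarios

def ethics_subspace(X, k=10):
    """Return an orthonormal basis (k x d) for the top-k principal
    directions of variation in the ethics embeddings."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, vt = svd(Xc, full_matrices=False)
    return vt[:k]

def alignment(x, basis):
    """Fraction of a vector's norm captured by the subspace
    (0 = orthogonal, 1 = fully contained); one possible way to
    quantify concordance between a cultural norm and embedded ethics."""
    proj = basis.T @ (basis @ x)
    return np.linalg.norm(proj) / np.linalg.norm(x)

basis = ethics_subspace(ethics_emb, k=10)
scores = np.array([alignment(x, basis) for x in norm_emb])
print("mean alignment of cultural norms with the ethics subspace:", scores.mean())

In practice the embeddings would come from an LLM's hidden states, and the alignment scores could then be correlated with the model's per-scenario errors on a cultural benchmark, in the spirit of the analysis the abstract outlines.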
Anthology ID:
2025.emnlp-main.743
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14726–14747
URL:
https://aclanthology.org/2025.emnlp-main.743/
Cite (ACL):
Aida Ramezani and Yang Xu. 2025. The discordance between embedded ethics and cultural inference in large language models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 14726–14747, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
The discordance between embedded ethics and cultural inference in large language models (Ramezani & Xu, EMNLP 2025)
PDF:
https://aclanthology.org/2025.emnlp-main.743.pdf
Checklist:
2025.emnlp-main.743.checklist.pdf