Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs

Julia Watson, Sophia S. Lee, Barend Beekhuizen, Suzanne Stevenson

Abstract
We study language ideologies in text produced by LLMs through a case study on English gendered language reform (concerning role nouns like congressperson/-woman/-man, and singular they). First, we find political bias: when asked to use language that is “correct” or “natural”, LLMs use language most similar to what they produce when asked to align with conservative (vs. progressive) values. This shows how LLMs’ metalinguistic preferences can implicitly communicate the language ideologies of a particular political group, even in seemingly non-political contexts. Second, we find that LLMs exhibit internal inconsistency: they use gender-neutral variants more often when more explicit metalinguistic context is provided. This shows how the language ideologies expressed in text produced by LLMs can vary, which users may not expect. We discuss the broader implications of these findings for value alignment.
Anthology ID:
2025.coling-main.80
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1201–1223
URL:
https://aclanthology.org/2025.coling-main.80/
Cite (ACL):
Julia Watson, Sophia S. Lee, Barend Beekhuizen, and Suzanne Stevenson. 2025. Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1201–1223, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs (Watson et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.80.pdf