Linear Steerability in Language Models: When It Emerges and How It Evolves

Jianshu She, Xinyue Li, Eric P. Xing, Zhengzhong Liu, Qirong Ho


Abstract
Language models can be steered by modifying their internal representations to control concepts such as emotion, style, or truthfulness in generation. However, the conditions for an effective intervention remain unclear and are often validated through heuristics and trial-and-error. To fill this gap, we demonstrate that intervention efficacy, measured by linear steerability (i.e., the ability to adjust output via linear transformations of hidden states), emerges during intermediate stages of training. Moreover, even closely related concepts (e.g., anger and sadness) exhibit steerability emergence at distinct stages of training. To better interpret the dynamics of steerability during training, we adapt existing intervention techniques into a unified framework, referred to as the “Intervention Detector” (ID), which is designed to reveal how linear steerability evolves over the course of training through hidden state and representation analysis. ID reveals that concepts become increasingly linearly separable in the hidden space as training progresses, which strongly correlates with the emergence of linear steerability. We further introduce ID-based metrics, such as heatmaps, entropy trends, and cosine similarity, to help interpret how linear steerability evolves throughout training. In addition, we apply ID across different model families to ensure the generality of our findings on steerability dynamics.
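The abstract's notion of linear steerability can be illustrated with a minimal sketch: extract a concept direction from hidden states (here via a difference of class means, one common choice; the paper's actual ID framework may differ) and apply a linear shift to a hidden state along that direction. All names and data below are hypothetical toy values, not the paper's method or code.

```python
import numpy as np

# Toy hidden states standing in for a model's layer activations:
# `pos` are states from prompts expressing a concept (e.g. anger),
# `neg` are states from neutral prompts. Purely synthetic data.
rng = np.random.default_rng(0)
dim = 8
pos = rng.normal(loc=1.0, size=(16, dim))
neg = rng.normal(loc=-1.0, size=(16, dim))

# A difference-of-means direction is one simple linear probe of the
# concept; as training makes concepts more linearly separable, such a
# direction becomes a more effective steering vector.
direction = pos.mean(axis=0) - neg.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden, alpha=2.0):
    """Linearly shift a hidden state along the concept direction."""
    return hidden + alpha * direction

def cos(h):
    """Cosine similarity between a hidden state and the direction."""
    return h @ direction / np.linalg.norm(h)

# Steering a 'neutral' state moves it toward the concept direction,
# which cosine similarity (one of the ID-style metrics) can track.
steered = steer(neg[0])
print(cos(neg[0]), cos(steered))
```

In a real model the shift would be added to the residual stream at a chosen layer during generation; the scalar `alpha` controls steering strength.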
Anthology ID:
2025.findings-emnlp.969
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17821–17846
URL:
https://aclanthology.org/2025.findings-emnlp.969/
Cite (ACL):
Jianshu She, Xinyue Li, Eric P. Xing, Zhengzhong Liu, and Qirong Ho. 2025. Linear Steerability in Language Models: When It Emerges and How It Evolves. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 17821–17846, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Linear Steerability in Language Models: When It Emerges and How It Evolves (She et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.969.pdf
Checklist:
 2025.findings-emnlp.969.checklist.pdf