Improving LLM Reasoning through Interpretable Role-Playing Steering

Anyi Wang, Dong Shu, Yifan Wang, Yunpu Ma, Mengnan Du


Abstract
Role-playing has emerged as an effective technique for enhancing the reasoning capabilities of large language models (LLMs). However, existing methods primarily rely on prompt engineering, which often lacks stability and interpretability. In this paper, we introduce Sparse Autoencoder Role-Playing Steering (SRPS), a novel framework that identifies and manipulates internal model features associated with role-playing behavior. Our approach extracts latent representations from role-play prompts, selects the most relevant features based on activation patterns, and constructs a steering vector that can be injected into the model’s residual stream with controllable intensity. Our method enables fine-grained control over role-specific behavior and offers insights into how role information influences internal model activations. Extensive experiments across various reasoning benchmarks and model sizes demonstrate consistent performance gains. Notably, in the zero-shot chain-of-thought (CoT) setting, the accuracy of Llama3.1-8B on CSQA improves from 31.86% to 39.80%, while Gemma2-9B on SVAMP increases from 37.50% to 45.10%. These results highlight the potential of SRPS to enhance reasoning ability in LLMs, providing better interpretability and stability compared to traditional prompt-based role-playing.
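The pipeline the abstract describes, extracting SAE features from role-play prompts, selecting the most relevant ones by activation pattern, and injecting a steering vector into the residual stream at a controllable intensity, can be sketched as follows. All dimensions, weights, and names here are illustrative stand-ins (random toy data, not a pretrained SAE), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): residual-stream width
# d_model and SAE dictionary size d_sae.
d_model, d_sae = 64, 512

# Toy sparse-autoencoder weights standing in for a pretrained SAE
# over the model's residual stream.
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))

def sae_features(resid):
    """Encode residual-stream activations into sparse SAE features (ReLU)."""
    return np.maximum(resid @ W_enc, 0.0)

# Step 1: extract latent features for role-play vs. neutral prompts
# (random vectors stand in for real hidden states).
role_acts = sae_features(rng.normal(size=(16, d_model)) + 0.5)
base_acts = sae_features(rng.normal(size=(16, d_model)))

# Step 2: select the k features whose mean activation differs most
# between the role-play and neutral prompt sets.
k = 8
diff = role_acts.mean(axis=0) - base_acts.mean(axis=0)
top_features = np.argsort(-diff)[:k]

# Step 3: build a steering vector from the selected SAE decoder
# directions, then add it to the residual stream with intensity alpha.
steering_vec = W_dec[top_features].sum(axis=0)
steering_vec /= np.linalg.norm(steering_vec)

def steer(resid, alpha=4.0):
    """Inject the steering vector into residual activations at strength alpha."""
    return resid + alpha * steering_vec

steered = steer(rng.normal(size=(1, d_model)))
print(steered.shape)  # (1, 64)
```

In this sketch the intensity `alpha` plays the role of the paper's controllable injection strength: larger values push activations further along the role-play direction.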
Anthology ID:
2025.findings-emnlp.39
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
731–751
URL:
https://aclanthology.org/2025.findings-emnlp.39/
Cite (ACL):
Anyi Wang, Dong Shu, Yifan Wang, Yunpu Ma, and Mengnan Du. 2025. Improving LLM Reasoning through Interpretable Role-Playing Steering. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 731–751, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Improving LLM Reasoning through Interpretable Role-Playing Steering (Wang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.39.pdf
Checklist:
2025.findings-emnlp.39.checklist.pdf