Mechanistic Fine-tuning for In-context Learning

Hakaze Cho, Peng Luo, Mariko Kato, Rin Kaenbyou, Naoya Inoue


Abstract
In-context Learning (ICL) uses structured demonstration-query inputs to induce few-shot learning in Language Models (LMs), which are not originally pre-trained on ICL-style data. To bridge the gap between ICL and pre-training, some approaches fine-tune LMs on large ICL-style datasets in an end-to-end paradigm at massive computational cost. To reduce such costs, in this paper we propose Attention Behavior Fine-Tuning (ABFT), which builds on previous findings about the inner mechanism of ICL and defines training objectives on the attention scores rather than the final outputs: attention is encouraged to concentrate on the correct label tokens presented in the context and discouraged from the wrong label tokens. Our experiments on 9 modern LMs and 8 datasets empirically show that ABFT outperforms prior methods in performance, robustness, unbiasedness, and efficiency, at only around 0.01% of their data cost. Moreover, our subsequent analysis finds that the end-to-end training objective subsumes the ABFT objective, suggesting that ICL-style data implicitly biases training toward the emergence of induction heads. Our work demonstrates the possibility of controlling specific module sequences within LMs to improve their behavior, opening up future applications of mechanistic interpretability.
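The paper's exact objective is not reproduced on this page; as a rough illustration of the idea stated in the abstract (rewarding attention mass on correct label tokens and penalizing mass on wrong ones), a minimal PyTorch-style sketch might look like the following. The function name, mask inputs, and sign convention are assumptions for illustration only, not the authors' implementation.

```python
import torch

def abft_style_loss(attn_scores: torch.Tensor,
                    correct_label_mask: torch.Tensor,
                    wrong_label_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative attention-behavior objective (not the paper's exact loss).

    attn_scores:        (batch, heads, seq_len) attention from the final query
                        token to every context position, for one layer.
    correct_label_mask: (batch, seq_len) boolean mask of demonstration label
                        tokens that match the query's correct label.
    wrong_label_mask:   (batch, seq_len) boolean mask of the other label tokens.
    """
    # Attention mass on correct-label positions should be large ...
    correct_mass = (attn_scores * correct_label_mask.unsqueeze(1)).sum(dim=-1)
    # ... and mass on wrong-label positions should be small.
    wrong_mass = (attn_scores * wrong_label_mask.unsqueeze(1)).sum(dim=-1)
    # Minimizing this pushes attention toward correct labels, away from wrong ones.
    return (wrong_mass - correct_mass).mean()
```

In such a setup, this loss would replace the usual next-token cross-entropy, so only the attention behavior of selected heads is supervised rather than the model's final outputs.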
Anthology ID:
2025.blackboxnlp-1.21
Volume:
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
330–357
URL:
https://aclanthology.org/2025.blackboxnlp-1.21/
Cite (ACL):
Hakaze Cho, Peng Luo, Mariko Kato, Rin Kaenbyou, and Naoya Inoue. 2025. Mechanistic Fine-tuning for In-context Learning. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 330–357, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Mechanistic Fine-tuning for In-context Learning (Cho et al., BlackboxNLP 2025)
PDF:
https://aclanthology.org/2025.blackboxnlp-1.21.pdf