Gregory Polyakov
2025
Interpretability Analysis of Arithmetic In-Context Learning in Large Language Models
Gregory Polyakov | Christian Hepting | Carsten Eickhoff | Seyed Ali Bahrainian
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) exhibit sophisticated behavior, notably solving arithmetic with only a few in-context examples (ICEs). Yet the computations that connect those examples to the answer remain opaque. We probe four open-weight LLMs (Pythia-12B, Llama-3.1-8B, MPT-7B, and OPT-6.7B) on basic arithmetic to examine how they process ICEs. Our study integrates activation patching, information-flow analysis, automatic circuit discovery, and the logit-lens perspective into a unified pipeline. Within this framework we isolate partial-sum representations in three-operand tasks, investigate their influence on final logits, and derive linear function vectors that characterize tasks and align with ICE-induced activations. Controlled ablations show that strict pattern consistency in the formatting of ICEs guides the models more strongly than the symbols chosen or even the factual correctness of the examples. By unifying four complementary interpretability tools, this work delivers one of the most comprehensive interpretability studies of LLM arithmetic to date, and the first on three-operand tasks. Our code is publicly available.
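As a concrete illustration of one tool in this pipeline, the sketch below applies the logit-lens perspective to a three-operand arithmetic prompt: each layer's residual stream is projected through the model's unembedding to read off that layer's current next-token guess. This is a minimal sketch assuming the Hugging Face transformers API, with a small Pythia checkpoint standing in for Pythia-12B; the prompt and module paths are illustrative, not the paper's released pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in for Pythia-12B (assumption: any GPT-NeoX checkpoint works here).
model_name = "EleutherAI/pythia-70m"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Illustrative few-shot prompt for a three-operand addition task.
prompt = "2 + 3 + 4 = 9\n5 + 1 + 2 = 8\n7 + 2 + 3 ="
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
unembed = model.get_output_embeddings().weight   # [vocab, hidden]
final_ln = model.gpt_neox.final_layer_norm       # GPT-NeoX-specific module path

for layer, h in enumerate(out.hidden_states):
    # Logit lens: apply the final LayerNorm and unembedding to an intermediate layer.
    logits = final_ln(h[0, -1]) @ unembed.T
    print(f"layer {layer:2d} -> {tok.decode(logits.argmax().item())!r}")
```

Watching where the printed guess first becomes the correct sum gives a quick read on the layer at which the answer, or a partial sum, becomes linearly decodable.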
ToolReflection: Improving Large Language Models for Real-World API Calls with Self-Generated Data
Gregory Polyakov | Ilseyar Alimova | Dmitry Abulkhanov | Ivan Sedykh | Andrey Bout | Sergey Nikolenko | Irina Piontkovskaya
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
While open-source large language models (LLMs) have advanced in leveraging third-party tools, significant challenges remain in real-world API usage, where behavior is unpredictable or poorly specified. Existing benchmarks often fail to capture this complexity. We propose ToolReflection, a novel method that improves LLMs’ ability to self-correct API calls by utilizing real-time API feedback. We also introduce new datasets specifically designed to test model performance under realistic conditions. In ToolReflection, models undergo instruction tuning on a dataset augmented with self-generated errors and corrections. Our evaluation across the ToolAlpaca and ToolBench benchmarks and three newly developed datasets (GPT4Tools-OOD, GPT4Tools-OOD-Hard, and Multistep-100) demonstrates its effectiveness. ToolReflection boosts overall success rates by 25.4% on GPT4Tools-OOD, 56.2% on GPT4Tools-OOD-Hard, and 4% on Multistep-100, outperforming the original models. On ToolAlpaca, we show a 14% improvement in the “Simulated” setting and a 10.5% improvement in the “Real-world” scenario. Our error analysis highlights that ToolReflection significantly enhances recovery from incorrect tool calls, even with incomplete or erroneous API documentation. We have released the code, prompts, and data at https://github.com/polgrisha/ToolReflection.
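The core of the method is the feedback loop: the model emits an API call, receives real-time feedback from the API, and retries on failure. The sketch below shows that loop in outline; the helper names (generate_call, execute_api) and the retry budget are hypothetical stand-ins for illustration, not the released implementation (see the repository above for that).

```python
from typing import Callable

def tool_reflection_loop(
    generate_call: Callable[[str], str],             # LLM: prompt -> API call string
    execute_api: Callable[[str], tuple[bool, str]],  # call -> (success, feedback)
    task: str,
    max_retries: int = 3,
) -> str | None:
    """Generate-execute-correct loop: retry the call using the API's feedback."""
    prompt = task
    for _ in range(max_retries):
        call = generate_call(prompt)
        ok, feedback = execute_api(call)
        if ok:
            return feedback
        # Feed the API error back so the model can self-correct its next attempt.
        prompt = (f"{task}\nPrevious call: {call}\n"
                  f"API feedback: {feedback}\nFix the call.")
    return None
```

Instruction tuning on self-generated error-and-correction traces teaches the model to make productive use of the feedback string inside this loop, which is what the success-rate gains above measure.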