Decomposition-Enhanced Training for Post-Hoc Attributions in Language Models

Sriram Balasubramanian, Samyadeep Basu, Koustava Goswami, Ryan A. Rossi, Varun Manjunatha, Roshan Santhosh, Ruiyi Zhang, Soheil Feizi, Nedim Lipka


Abstract
Large language models (LLMs) are increasingly used for long-document question answering, where reliable attribution to sources is critical for trust. Existing post-hoc attribution methods work well for extractive QA but struggle in multi-hop, abstractive, and semi-extractive settings, where answers synthesize information across passages. To address these challenges, we argue that post-hoc attribution can be reframed as a reasoning problem, where answers are decomposed into constituent units, each tied to specific context. We first show that prompting models to generate such decompositions alongside attributions improves performance. Building on this, we introduce DecompTune, a post-training method that teaches models to produce answer decompositions as intermediate reasoning steps. We curate a diverse dataset of complex QA tasks, annotated with decompositions by a strong LLM, and post-train Qwen-2.5 (7B and 14B) using a two-stage SFT + GRPO pipeline with task-specific curated rewards. Across extensive experiments and ablations, DecompTune substantially improves attribution quality, outperforming prior methods and matching or exceeding state-of-the-art frontier models.
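To make the decomposition-then-attribution idea from the abstract concrete, here is a minimal sketch of the prompting variant the paper evaluates before post-training: the answer is first decomposed into factual units, and each unit is then attributed to a context passage. The `llm` callable, the prompt wording, and the output parsing are illustrative assumptions, not the authors' implementation; in DecompTune itself, this decomposition step is learned via the SFT + GRPO pipeline rather than elicited purely by prompting.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of decomposition-based post-hoc attribution.
# `llm` is any text-in/text-out model call (e.g., a thin wrapper around
# an inference API); the prompts and parsing below are assumptions made
# for illustration, not the prompts used in the paper.

def attribute_with_decomposition(
    llm: Callable[[str], str],
    question: str,
    answer: str,
    passages: List[str],
) -> List[Dict[str, str]]:
    """Decompose an answer into units, then attribute each unit to a passage."""
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))

    # Step 1: decompose the answer into minimal factual units.
    decomp_prompt = (
        "Decompose the answer into minimal factual units, one per line.\n"
        f"Question: {question}\nAnswer: {answer}\nUnits:"
    )
    units = [u.strip() for u in llm(decomp_prompt).splitlines() if u.strip()]

    # Step 2: attribute each unit to the passage that supports it.
    attributions = []
    for unit in units:
        attr_prompt = (
            "Which passage supports this unit? Reply with its index only.\n"
            f"Passages:\n{numbered}\nUnit: {unit}\nIndex:"
        )
        attributions.append({"unit": unit, "source": llm(attr_prompt).strip()})
    return attributions
```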
Anthology ID:
2026.eacl-long.236
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
5070–5084
URL:
https://aclanthology.org/2026.eacl-long.236/
Cite (ACL):
Sriram Balasubramanian, Samyadeep Basu, Koustava Goswami, Ryan A. Rossi, Varun Manjunatha, Roshan Santhosh, Ruiyi Zhang, Soheil Feizi, and Nedim Lipka. 2026. Decomposition-Enhanced Training for Post-Hoc Attributions in Language Models. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5070–5084, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Decomposition-Enhanced Training for Post-Hoc Attributions in Language Models (Balasubramanian et al., EACL 2026)
PDF:
https://aclanthology.org/2026.eacl-long.236.pdf
Checklist:
2026.eacl-long.236.checklist.pdf