Large language models (LLMs) are increasingly used for long-document question answering, where reliable attribution to sources is critical for trust. Existing post-hoc attribution methods work well for extractive QA but struggle in multi-hop, abstractive, and semi-extractive settings, where answers synthesize information across passages. To address these challenges, we argue that post-hoc attribution can be reframed as a reasoning problem, where answers are decomposed into constituent units, each tied to specific context. We first show that prompting models to generate such decompositions alongside attributions improves performance. Building on this, we introduce DecompTune, a post-training method that teaches models to produce answer decompositions as intermediate reasoning steps. We curate a diverse dataset of complex QA tasks, annotated with decompositions by a strong LLM, and post-train Qwen-2.5 (7B and 14B) using a two-stage SFT + GRPO pipeline with task-specific curated rewards. Across extensive experiments and ablations, DecompTune substantially improves attribution quality, outperforming prior methods and matching or exceeding state-of-the-art frontier models.