Compounding errors pose a significant challenge in automatic literature review generation, as inaccuracies can cascade across multi-stage retrieval and generation workflows. Existing self-correction strategies often lack mechanisms to effectively track and consolidate verified information throughout the process, making it difficult to prevent error accumulation and propagation. In this paper, we propose Structure-Guided Memory Consolidation (SGMC), a novel framework that incrementally consolidates and verifies information using structured representations at each stage of the literature review pipeline. SGMC consists of three key modules: Tree-Guided Memory for hierarchical literature retrieval and outline generation, Hub-Guided Memory for evidence extraction and iterative content refinement, and Self-Loop Memory for proactive error correction via historical feedback. Extensive experiments on public benchmarks and a newly constructed large-scale dataset demonstrate that SGMC achieves state-of-the-art performance in citation accuracy and content quality, significantly mitigating compounding errors in long-form literature review generation.