Chain-of-Thought (CoT) prompting has been shown to significantly improve the reasoning accuracy of large language models (LLMs) on complex tasks. However, due to the autoregressive, step-by-step generation paradigm, existing CoT methods suffer from two fundamental limitations. First, the reasoning process is highly sensitive to early decisions: once an initial error is introduced, it tends to propagate and amplify through subsequent steps, and the lack of a global coordination and revision mechanism makes such errors difficult to correct, ultimately distorting the reasoning chain. Second, current CoT approaches lack structured analysis techniques for filtering redundant reasoning and extracting key reasoning features, resulting in unstable reasoning processes and limited interpretability. To address these issues, we propose GHS-TDA. GHS-TDA first constructs a semantically enriched global hypothesis graph to aggregate, align, and coordinate multiple candidate reasoning paths, thereby providing alternative global correction routes when local reasoning fails. It then applies topological data analysis based on persistent homology to capture stable multi-scale structures, remove redundancy and inconsistencies, and extract a more reliable reasoning skeleton. By jointly leveraging reasoning diversity and topological stability, GHS-TDA achieves self-adaptive convergence, produces high-confidence and interpretable reasoning paths, and consistently outperforms strong baselines in both accuracy and robustness across multiple reasoning benchmarks.
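The two-stage pipeline the abstract describes — aggregate candidate reasoning paths into a hypothesis graph, then keep only the topologically stable structure — can be sketched with 0-dimensional persistent homology (tracking connected components under a distance filtration). This is a minimal illustration, not the paper's implementation: the node names, distance weights, and the cutoff rule below are all assumptions introduced for the example.

```python
# Minimal sketch: H0 persistence over a weighted hypothesis graph.
# Hypothetical setup: nodes are candidate reasoning steps, edge weights
# are assumed semantic distances between them.

def h0_persistence(nodes, edges):
    """Union-find over a distance filtration.

    edges: list of (weight, u, v) tuples.
    Returns the death times of merged components (each born at 0.0)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deaths = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru      # the absorbed component "dies" at scale w
            deaths.append(w)
    return deaths

def components_below(nodes, edges, cutoff):
    """Connected components using only edges with weight strictly below cutoff."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for w, u, v in edges:
        if w < cutoff:
            parent[find(u)] = find(v)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

# Toy hypothesis graph: steps a-d are mutually consistent (small distances),
# step e is an inconsistent outlier hypothesis (large distance to the rest).
nodes = ["a", "b", "c", "d", "e"]
edges = [(0.10, "a", "b"), (0.20, "b", "c"), (0.15, "c", "d"), (0.90, "d", "e")]

deaths = h0_persistence(nodes, edges)   # [0.10, 0.15, 0.20, 0.90]
cutoff = max(deaths)                    # the most persistent (last) merge
skeleton = max(components_below(nodes, edges, cutoff), key=len)
print(sorted(skeleton))                 # ['a', 'b', 'c', 'd']
```

Here the merge at distance 0.90 is the most persistent topological feature; cutting the filtration just below it separates the stable skeleton {a, b, c, d} from the outlier {e} — a toy analogue of removing redundant or inconsistent hypotheses while keeping the reliable reasoning core.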