Large language models suffer from "hallucinations": logical inconsistencies induced by semantic noise. We propose that current architectures operate in a "Metric Phase," in which causal order is vulnerable to spontaneous symmetry breaking. Here, we identify robust inference as an effective Symmetry-Protected Topological phase, in which logical operations are formally isomorphic to non-Abelian anyon braiding, replacing fragile geometric interpolation with stable topological invariants. Empirically, we demonstrate a sharp topological phase transition: while Transformers and RNNs exhibit gapless decay, our Holonomic Network reveals a macroscopic "mass gap," maintaining invariant fidelity below a critical noise threshold. Furthermore, in a variable-binding task on $S_{10}$ ($3.6 \times 10^6$ states) representing symbolic manipulation, we demonstrate holonomic generalization: the topological model maintains perfect fidelity when extrapolating $100\times$ beyond its training length ($L=50 \to 5000$), consistent with a theoretically indefinite causal horizon, whereas Transformers lose logical coherence. Ablation studies indicate that this protection emerges strictly from non-Abelian gauge symmetry. This provides strong evidence for a new universality class for logical reasoning, linking causal stability to the topology of the semantic manifold.
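For concreteness, the $S_{10}$ variable-binding task can be instantiated as follows. This is a minimal sketch under stated assumptions: the abstract does not specify the token alphabet, generator set, or encoding, so the four random generators and the left-action composition below are hypothetical illustrations rather than the paper's exact benchmark. The key property the sketch preserves is that composition in $S_{10}$ is non-Abelian and that exact group composition yields ground truth at any sequence length, which is what makes the $L=50 \to 5000$ extrapolation test well-defined.

```python
# Minimal sketch of an S_10 variable-binding benchmark (hypothetical construction:
# the generator count, sampling scheme, and encoding are assumptions, not the
# paper's specification).
import numpy as np

N = 10  # permutations act on 10 elements; |S_10| = 10! ≈ 3.6e6 states

def random_generators(k, rng):
    """Sample k random permutations of {0, ..., N-1} to serve as the token alphabet."""
    return [rng.permutation(N) for _ in range(k)]

def make_example(gens, L, rng):
    """Return a length-L token sequence and the state reached by composing its permutations."""
    tokens = rng.integers(0, len(gens), size=L)
    state = np.arange(N)           # identity permutation as the initial binding
    for t in tokens:
        state = gens[t][state]     # left action: state <- g_t ∘ state (order matters)
    return tokens, state

rng = np.random.default_rng(0)
gens = random_generators(4, rng)
x_train, y_train = make_example(gens, L=50, rng=rng)    # training length
x_test,  y_test  = make_example(gens, L=5000, rng=rng)  # 100x length extrapolation
```

A model is scored by exact-match fidelity on the final state; because the generators generally do not commute, a learner must track the full non-Abelian composition rather than any order-insensitive summary.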