Large language models achieve impressive results, but distinguishing factual reasoning from hallucinations remains challenging. We propose a spectral analysis framework that models transformer layers as dynamic graphs induced by attention, with token embeddings as signals on these graphs. Through graph signal processing, we define diagnostics including Dirichlet energy, spectral entropy, and high-frequency energy ratios, with theoretical connections to computational stability. Experiments across GPT architectures suggest universal spectral patterns: factual statements exhibit consistent "energy mountain" behavior with low-frequency convergence, while different hallucination types show distinct signatures. Logical contradictions destabilize spectra with large effect sizes ($g>1.0$), semantic errors remain spectrally stable but show connectivity drift, and substitution hallucinations display intermediate perturbations. A simple detector using spectral signatures achieves 88.75% accuracy versus 75% for a perplexity-based baseline, demonstrating practical utility. These findings indicate that spectral geometry may capture reasoning patterns and error behaviors, potentially offering a framework for hallucination detection in large language models.
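To make the three diagnostics concrete, the sketch below shows one plausible way to compute them from a single layer's attention matrix and token embeddings; it is an illustrative assumption, not the authors' implementation, and the names `attn`, `emb`, and `high_freq_cutoff` are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's code): spectral diagnostics on the
# attention-induced graph of one transformer layer.
import numpy as np

def spectral_diagnostics(attn, emb, high_freq_cutoff=0.5):
    """attn: (n, n) attention weights; emb: (n, d) token embeddings."""
    W = 0.5 * (attn + attn.T)          # symmetrize attention into an undirected graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian

    # Dirichlet energy: smoothness of the embedding signal on the graph
    dirichlet = np.trace(emb.T @ L @ emb)

    # Graph Fourier transform: project embeddings onto Laplacian eigenvectors
    eigvals, eigvecs = np.linalg.eigh(L)
    coeffs = eigvecs.T @ emb           # spectral coefficients, one row per frequency
    power = (coeffs ** 2).sum(axis=1)  # signal energy per graph frequency
    p = power / (power.sum() + 1e-12)

    # Spectral entropy of the normalized energy distribution
    entropy = -np.sum(p * np.log(p + 1e-12))

    # High-frequency energy ratio: share of energy above a cutoff frequency index
    k = int(high_freq_cutoff * len(eigvals))
    hf_ratio = power[k:].sum() / (power.sum() + 1e-12)

    return dirichlet, entropy, hf_ratio
```

Under this reading, low Dirichlet energy and a small high-frequency ratio correspond to the low-frequency convergence described for factual statements, while destabilized spectra would show up as shifts in the entropy and high-frequency terms.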