Score-based diffusion models have achieved remarkable performance in generating realistic images, audio, and video data. While these models produce high-quality samples with impressive detail, they often introduce unrealistic artifacts, such as distorted fingers or hallucinated text with no meaning. This paper focuses on textual hallucinations, where diffusion models correctly generate individual symbols but assemble them in a nonsensical manner. Through experimental probing, we consistently observe that this phenomenon is attributable to the network's local generation bias. Denoising networks tend to produce outputs that rely heavily on highly correlated local regions, particularly when different dimensions of the data distribution are nearly pairwise independent. This behavior leads to a generation process that decomposes the global distribution into separate, independent distributions for each symbol, ultimately failing to capture the global structure, including the underlying grammar. Intriguingly, this bias persists across various denoising network architectures, including MLPs and Transformers, despite their capacity to model global dependencies. These findings also provide insight into other types of hallucinations, beyond text, as consequences of implicit biases in denoising models. Additionally, we theoretically analyze the training dynamics of a specific case, a two-layer MLP learning parity points on a hypercube, offering an explanation of the underlying mechanism.
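To make the failure mode concrete, the following is a minimal sketch (not the paper's experiment) of why decomposing a global distribution into independent per-coordinate distributions loses global structure, using the parity-on-a-hypercube setting mentioned above; the dimension, sample count, and sampling procedure are illustrative assumptions.

```python
# Minimal sketch: even if every per-coordinate marginal is matched exactly,
# sampling coordinates independently destroys the global parity constraint.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 10_000  # hypothetical dimension and sample count

# Even-parity points on the hypercube {-1, +1}^d: product of coordinates = +1.
x = rng.choice([-1, 1], size=(n, d))
x[:, 0] = np.prod(x[:, 1:], axis=1)          # force even parity
assert np.all(np.prod(x, axis=1) == 1)

# Each marginal is (approximately) uniform over {-1, +1}.
marginals = (x == 1).mean(axis=0)            # P(x_i = +1) per coordinate

# "Local" generation: sample every coordinate independently from its marginal.
u = rng.random(size=(n, d))
samples = np.where(u < marginals, 1, -1)

# The global parity constraint now holds only about half of the time.
print("data parity rate   :", np.mean(np.prod(x, axis=1) == 1))        # 1.0
print("factorized samples :", np.mean(np.prod(samples, axis=1) == 1))  # ~0.5
```

Under this toy model, each "symbol" is generated correctly in isolation, yet the assembled sample violates the global rule roughly half the time, mirroring the textual hallucinations described above.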