Diffusion models trained on different, non-overlapping subsets of a dataset often produce strikingly similar outputs when given the same noise seed. We trace this consistency to a simple linear effect: the Gaussian statistics shared across splits already predict much of each generated image. To formalize this, we develop a random matrix theory (RMT) framework that quantifies how finite datasets shape the expectation and variance of the learned denoiser and sampling map in the linear setting. For expectations, sampling variability acts as a renormalization of the noise level through a self-consistent relation $σ^2 \mapsto κ(σ^2)$, explaining why limited data overshrinks low-variance directions and pulls samples toward the dataset mean. For fluctuations, our variance formulas reveal three key factors behind cross-split disagreement: \textit{anisotropy} across eigenmodes, \textit{inhomogeneity} across inputs, and overall scaling with dataset size. Extending deterministic-equivalence tools to fractional matrix powers further allows us to analyze entire sampling trajectories. The theory sharply predicts the behavior of linear diffusion models, and we validate its predictions on UNet and DiT architectures in their non-memorization regime, identifying where and how samples deviate across training-data splits. This provides a principled baseline for reproducibility in diffusion training, linking spectral properties of data to the stability of generative outputs.
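The noise-level renormalization can be illustrated numerically with the standard ridge-resolvent deterministic equivalent from RMT: for a linear denoiser fit on $n$ samples, the expected eigenmode shrinkage $\lambda_i/(\lambda_i + σ^2)$ is replaced by $\lambda_i/(\lambda_i + κ)$, where $κ$ solves $κ = σ^2 + (κ/n)\,\mathrm{tr}\,Σ(Σ + κ I)^{-1}$. The sketch below is illustrative only; the population spectrum `lam` and all parameter values are hypothetical choices, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 50, 100, 400
lam = 1.0 / np.arange(1, d + 1)   # hypothetical population eigenvalues of Sigma
sigma2 = 0.1                      # noise level sigma^2

# Solve the self-consistent relation kappa = sigma2 + (kappa/n) tr[Sigma (Sigma + kappa I)^{-1}]
kappa = sigma2
for _ in range(1000):
    kappa = sigma2 + (kappa / n) * np.sum(lam / (lam + kappa))

# Monte Carlo: average the learned denoiser's per-mode shrinkage over independent data splits
shrink = np.zeros(d)
for _ in range(trials):
    X = rng.standard_normal((n, d)) * np.sqrt(lam)          # one finite dataset split
    S = X.T @ X / n                                         # its sample covariance
    M = S @ np.linalg.inv(S + sigma2 * np.eye(d))           # linear (Wiener-style) denoiser gain
    shrink += np.diag(M)
shrink /= trials

pred_kappa = lam / (lam + kappa)    # RMT prediction: renormalized noise kappa > sigma^2
pred_naive = lam / (lam + sigma2)   # infinite-data (population) shrinkage
```

Since $κ > σ^2$, the finite-data denoiser shrinks low-variance modes more aggressively than the population-optimal filter, which is the mean-pull effect described above; averaging over splits, `shrink` tracks `pred_kappa` rather than `pred_naive`.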