In this work, we study the generalizability of diffusion models by examining the hidden properties of the learned score functions, which are essentially a series of deep denoisers trained at varying noise levels. We observe that, as diffusion models transition from memorization to generalization, their nonlinear diffusion denoisers exhibit increasing linearity. This discovery leads us to investigate the linear counterparts of the nonlinear diffusion models, namely a series of linear models trained to match the function mappings of the nonlinear diffusion denoisers. Surprisingly, these linear denoisers are approximately the optimal denoisers for a multivariate Gaussian distribution characterized by the empirical mean and covariance of the training dataset. This finding implies that diffusion models have an inductive bias toward capturing and utilizing the Gaussian structure (covariance information) of the training dataset for data generation. We empirically demonstrate that this inductive bias is a unique property of diffusion models in the generalization regime, and that it becomes increasingly evident when the model's capacity is relatively small compared to the size of the training dataset. When the model is highly overparameterized, this inductive bias emerges during the early phase of training, before the model fully memorizes its training data. Our study provides crucial insights into the strong generalization phenomenon recently observed in real-world diffusion models.
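For concreteness, the optimal (MMSE) denoiser of a multivariate Gaussian admits a well-known closed form. A minimal sketch, assuming a variance-exploding noising model $x_\sigma = x_0 + \sigma\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$ (the paper's exact noise parameterization may differ), where $\mu$ and $\Sigma$ denote the empirical mean and covariance of the training set:

$$
D^{\ast}_{\sigma}(x) \;=\; \mathbb{E}\!\left[x_0 \mid x_\sigma = x\right] \;=\; \mu + \Sigma\left(\Sigma + \sigma^{2} I\right)^{-1}(x - \mu).
$$

As $\sigma \to 0$ this map approaches the identity, and as $\sigma \to \infty$ it collapses to the mean $\mu$; the linear denoisers thus interpolate between the data and its Gaussian summary across noise levels.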