We investigate Gaussian Universality for data distributions generated by diffusion models. By Gaussian Universality we mean that the test error of a generalized linear model $f(\mathbf{W})$ trained for a classification task on the diffusion-generated data matches the test error of $f(\mathbf{W})$ trained on a Gaussian Mixture with matching means and covariances per class. In other words, in the linear setting the test error depends only on the first- and second-order statistics of the diffusion-generated data. As a corollary, the analysis of the test error of linear classifiers on diffusion-generated data reduces to the Gaussian case. Analysing the performance of models trained on synthetic data is a pertinent problem given the surge of methods such as \cite{sehwag2024stretchingdollardiffusiontraining}. Moreover, we show that, for any $1$-Lipschitz scalar function $\phi$, $\phi(\mathbf{x})$ is close to $\mathbb{E}\,\phi(\mathbf{x})$ with high probability when $\mathbf{x}$ is sampled from the conditional diffusion model of each class. Finally, we note that current approaches to proving universality do not apply to diffusion-generated data: the covariance matrices of the data tend to have vanishing minimum singular values, contrary to the assumption made in the literature. Extending previous mathematical universality results to this setting therefore remains an intriguing open question.
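As a minimal numerical sketch of the universality claim (not the paper's actual experimental setup), the following Python snippet replaces the conditional diffusion sampler with a hypothetical surrogate, a Lipschitz push-forward of Gaussian noise, builds a Gaussian mixture with the same per-class empirical means and covariances, and compares the test errors of a logistic regression trained on each. The functions `surrogate` and `gaussian_like` and all parameter values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test = 50, 5000, 5000

def surrogate(mean, n):
    # Hypothetical stand-in for a conditional diffusion sampler:
    # a 1-Lipschitz push-forward of Gaussian noise, so the samples
    # are non-Gaussian but concentrated (cf. the Lipschitz claim).
    z = rng.standard_normal((n, d))
    return mean + np.tanh(z)

def dataset(sampler0, sampler1, n):
    # Balanced two-class dataset: n samples per class.
    x = np.vstack([sampler0(n), sampler1(n)])
    y = np.r_[np.zeros(n), np.ones(n)]
    return x, y

def test_error(sampler0, sampler1):
    # Train and test on fresh draws from the same pair of samplers.
    x_tr, y_tr = dataset(sampler0, sampler1, n_train)
    x_te, y_te = dataset(sampler0, sampler1, n_test)
    clf = LogisticRegression(max_iter=2000).fit(x_tr, y_tr)
    return 1.0 - clf.score(x_te, y_te)

mu0, mu1 = np.zeros(d), 1.5 * np.ones(d) / np.sqrt(d)

# Test error of the linear classifier on the "diffusion" surrogate.
err_diff = test_error(lambda n: surrogate(mu0, n),
                      lambda n: surrogate(mu1, n))

# Gaussian mixture with matched per-class means and covariances.
def gaussian_like(ref):
    mu, cov = ref.mean(axis=0), np.cov(ref, rowvar=False)
    return lambda n: rng.multivariate_normal(mu, cov, size=n)

ref0, ref1 = surrogate(mu0, 50_000), surrogate(mu1, 50_000)
err_gauss = test_error(gaussian_like(ref0), gaussian_like(ref1))

print(f"surrogate-diffusion test error: {err_diff:.3f}")
print(f"matched-Gaussian test error:    {err_gauss:.3f}")
```

Under universality, the two printed errors should be close: the surrogate's non-Gaussianity lives only in moments beyond the second, which, by the claim, the linear classifier does not see.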