Generative diffusion processes are state-of-the-art machine learning models deeply connected with fundamental concepts in statistical physics. Depending on the dataset size and the capacity of the network, their behavior is known to transition from an associative memory regime to a generalization phase in a phenomenon that has been described as a glassy phase transition. Here, using statistical physics techniques, we extend the theory of memorization in generative diffusion to manifold-supported data. Our theoretical and experimental findings indicate that different tangent subspaces are lost due to memorization effects at different critical times and dataset sizes, with the critical values depending on the local variance of the data along those directions. Perhaps counterintuitively, we find that, under some conditions, subspaces of higher variance are lost first. This leads to a selective loss of dimensionality where some prominent features of the data are memorized without a full collapse onto any individual training point. We validate our theory with a comprehensive set of experiments on networks trained on both image datasets and linear manifolds, which show remarkable qualitative agreement with the theoretical predictions.
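As a toy illustration of the mechanism described above (a minimal sketch, not the paper's actual derivation or experimental setup), the snippet below compares the exact empirical score of a finite dataset supported on a two-dimensional linear "manifold" against the population score of the underlying Gaussian, direction by direction, across diffusion times. The per-direction deviation between the two is one way to probe when finite-sample (memorization) effects set in along each tangent direction; the dataset size `N`, the variances in `sigmas`, the probe point, and the time grid are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, for illustration): data on a 2D linear manifold with
# different variances along the two tangent directions (sigma1 > sigma2).
N = 200                        # dataset size (illustrative)
sigmas = np.array([3.0, 0.3])  # per-direction standard deviations (illustrative)
X = rng.normal(size=(N, 2)) * sigmas  # training set

def empirical_score(x, t):
    """Exact score of the Gaussian-smoothed empirical distribution
    p_t(x) = (1/N) * sum_i N(x; x_i, t*I)  (variance-exploding diffusion)."""
    diffs = X - x                                 # (N, 2): x_i - x
    logw = -np.sum(diffs**2, axis=1) / (2 * t)    # log softmax weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / t

def population_score(x, t):
    """Score of the true Gaussian data law diffused to time t:
    direction k has variance sigma_k^2 + t, so the score is -x_k/(sigma_k^2 + t)."""
    return -x / (sigmas**2 + t)

# Compare the two scores per direction over a grid of diffusion times.
# Memorization along a direction shows up as the empirical score drifting
# away from the population score at a direction-dependent time.
x = np.array([1.0, 1.0])
for t in [10.0, 1.0, 0.1, 0.01]:
    emp, pop = empirical_score(x, t), population_score(x, t)
    print(f"t={t:5.2f}  |empirical - population| per direction: {np.abs(emp - pop)}")
```

The exact empirical score used here corresponds to the perfectly memorizing limit of a trained network; it only exposes finite-sample effects and does not reproduce the free-energy analysis of the critical times developed in the paper.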