Classifier-guided diffusion models generate conditional samples by augmenting the reverse-time score with the gradient of a learned classifier, yet it remains unclear whether standard classifier training procedures yield effective diffusion guidance. We address this gap by showing that, under mild smoothness assumptions on the classifiers, controlling the cross-entropy error at each diffusion step also controls the error of the resulting guidance vectors: classifiers achieving conditional KL divergence $\varepsilon^2$ from the ground-truth conditional label probabilities induce guidance vectors with mean squared error $\widetilde{O}(d\varepsilon)$. Our result yields an upper bound on the sampling error under classifier guidance and bears resemblance to a reverse log-Sobolev-type inequality. Moreover, we show that the smoothness assumption is essential: we construct simple counterexamples demonstrating that, without it, control of the guidance vectors can fail for almost all distributions. To our knowledge, our work establishes the first quantitative link between classifier training and guidance alignment, yielding both a theoretical foundation for classifier guidance and principled guidelines for classifier selection.
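As a schematic restatement (the notation $\hat p_t$ for the learned classifier is introduced here for illustration, and the precise assumptions and constants are those of the theorem), classifier guidance augments the unconditional score as
\[
\nabla_x \log p_t(x \mid y) \;=\; \nabla_x \log p_t(x) \;+\; \nabla_x \log p_t(y \mid x),
\]
and our bound controls the error in the second term: if, at each diffusion step $t$, the classifier satisfies
\[
\mathbb{E}_{x \sim p_t}\!\left[\mathrm{KL}\!\left(p_t(\cdot \mid x)\,\big\|\,\hat p_t(\cdot \mid x)\right)\right] \le \varepsilon^2,
\]
then, under the smoothness assumptions, the induced guidance vectors obey
\[
\mathbb{E}_{x \sim p_t}\!\left[\bigl\|\nabla_x \log \hat p_t(y \mid x) - \nabla_x \log p_t(y \mid x)\bigr\|^2\right] \le \widetilde{O}(d\,\varepsilon).
\]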