Diffusion models have demonstrated remarkable performance in generating high-dimensional samples across domains such as vision, language, and the sciences. Although continuous-state diffusion models have been extensively studied both empirically and theoretically, discrete-state diffusion models, which are essential for applications involving text, sequences, and combinatorial structures, remain significantly less understood from a theoretical standpoint. In particular, all existing analyses of discrete-state models assume access to an empirical risk minimizer. In this work, we present a principled theoretical framework for analyzing discrete-state diffusion models, providing a state-of-the-art sample complexity bound of $\widetilde{\mathcal{O}}(\epsilon^{-4})$. Our structured decomposition of the score estimation error into statistical and optimization components offers critical insights into how diffusion models can be trained efficiently. This analysis addresses a fundamental gap in the literature and establishes both the theoretical tractability and the practical relevance of discrete-state diffusion models.
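To make the decomposition concrete, a schematic form of the error split described above is sketched below; the estimator $\hat{s}_\theta$, the target score $s$, and the terms $\varepsilon_{\mathrm{stat}}$ and $\varepsilon_{\mathrm{opt}}$ are illustrative placeholders rather than the paper's own notation.
% Schematic sketch (placeholder notation): the expected score estimation error
% is bounded by a statistical term, which shrinks as the sample size n grows,
% plus an optimization term, which shrinks with the training budget T.
\[
  \mathbb{E}\big[\lVert \hat{s}_\theta - s \rVert^2\big]
  \;\lesssim\;
  \underbrace{\varepsilon_{\mathrm{stat}}(n)}_{\text{statistical error}}
  \;+\;
  \underbrace{\varepsilon_{\mathrm{opt}}(T)}_{\text{optimization error}}.
\]
Roughly speaking, a sample complexity bound of this kind is obtained by choosing the sample size and the optimization budget so that both terms fall below the target accuracy $\epsilon$.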