Diffusion generative models transform noise into data by inverting a process that progressively adds noise to data samples. Inspired by concepts from the renormalization group in physics, which analyzes systems across different scales, we revisit diffusion models by exploring three key design aspects: 1) the choice of representation in which the diffusion process operates (e.g. pixel-, PCA-, Fourier-, or wavelet-basis), 2) the prior distribution that data is transformed into during diffusion (e.g. Gaussian with covariance $\Sigma$), and 3) the scheduling of noise levels applied separately to different parts of the data, captured by a component-wise noise schedule. Incorporating the flexibility in these choices, we develop a unified framework for diffusion generative models with greatly enhanced design freedom. In particular, we introduce soft-conditioning models that smoothly interpolate between standard diffusion models and autoregressive models (in any basis), conceptually bridging these two approaches. Our framework opens up a wide design space which may lead to more efficient training and data generation, and paves the way to novel architectures integrating different generative approaches and generation tasks.
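The three design choices above can be sketched together in a few lines: pick an orthonormal basis, diffuse each coefficient under its own schedule, and map back. This is a minimal illustration only; the basis matrix, the variance-preserving convention, and the `alpha(t, i)` schedule function are hypothetical stand-ins, not the paper's specific parameterization.

```python
import numpy as np

def forward_diffuse(x0, t, basis, alpha, rng=None):
    """Noise a sample component-wise in a chosen basis.

    x0    : data sample, shape (d,)
    t     : diffusion time in [0, 1]
    basis : orthonormal (d, d) matrix whose rows are basis vectors
            (identity -> pixel basis; PCA/Fourier/wavelet rows otherwise)
    alpha : callable alpha(t, i) -> per-component signal level in [0, 1]
            (hypothetical component-wise schedule)
    """
    rng = rng or np.random.default_rng()
    d = x0.shape[0]
    z = basis @ x0                        # coefficients in the chosen basis
    a = np.array([alpha(t, i) for i in range(d)])
    sigma = np.sqrt(1.0 - a**2)           # variance-preserving convention
    noisy = a * z + sigma * rng.standard_normal(d)
    return basis.T @ noisy                # back to the original representation
```

Under this sketch, a schedule that noises components one after another (a near-step function in `i`) degenerates into autoregressive generation in that basis, while a schedule that is identical across components recovers a standard diffusion model, which is the interpolation the abstract describes.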