As Diffusion Models have shown promising performance, substantial effort has been devoted to improving their controllability. However, how to train Diffusion Models to learn disentangled latent spaces, and how to naturally incorporate the disentangled conditions during sampling, remain underexplored. In this paper, we present a training framework for feature disentanglement of Diffusion Models (FDiff). We further propose two sampling methods that boost the realism of our Diffusion Models and also enhance controllability. Specifically, we train Diffusion Models conditioned on two latent features: a spatial content mask and a flattened style embedding. We rely on the inductive bias of the denoising process of Diffusion Models to encode pose/layout information in the content feature and semantic/style information in the style feature. Regarding the sampling methods, we first propose Generalized Composable Diffusion Models (GCDM), which generalize Composable Diffusion Models by breaking the conditional independence assumption to allow for some dependence between conditional inputs; our experiments show this is effective for realistic generation. Second, we propose timestep-dependent weight scheduling for the content and style features to further improve performance. We also observe that our proposed methods offer better controllability than existing methods in image manipulation and image translation.
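To make the two sampling ideas concrete, below is a minimal PyTorch sketch of how the composed noise prediction could look. The exact GCDM composition formula and the weight schedule are assumptions inferred from the description above, not the paper's definitive implementation, and all function and parameter names (gcdm_eps, timestep_weights, alpha, w_c, w_s, scale) are hypothetical.

```python
# A minimal sketch, assuming GCDM interpolates between a joint conditional
# guidance term (capturing dependence between content and style) and the
# conditionally independent composition of Composable Diffusion Models.
import torch

def timestep_weights(t: int, T: int) -> tuple[float, float]:
    """Hypothetical timestep-dependent weight schedule (linear, assumed).

    Early (high-noise) steps favor the content condition, since denoising
    lays down pose/layout first; late steps favor style, which governs
    fine-grained appearance.
    """
    r = t / T  # r is near 1 at the start of sampling (t = T), near 0 at the end
    return r, 1.0 - r  # (w_content, w_style)

def gcdm_eps(eps_uncond, eps_joint, eps_content, eps_style,
             w_c, w_s, alpha=0.5, scale=3.0):
    """Compose noise predictions GCDM-style (assumed form).

    alpha = 1 recovers purely joint (dependent) guidance;
    alpha = 0 recovers the conditionally independent composition
    of Composable Diffusion Models.
    """
    joint = eps_joint - eps_uncond
    indep = w_c * (eps_content - eps_uncond) + w_s * (eps_style - eps_uncond)
    return eps_uncond + scale * (alpha * joint + (1.0 - alpha) * indep)

# Toy usage: random tensors stand in for the four denoiser passes
# (unconditional, joint, content-only, style-only) at timestep t.
T, t = 1000, 800
eps = [torch.randn(1, 3, 64, 64) for _ in range(4)]
w_c, w_s = timestep_weights(t, T)
eps_hat = gcdm_eps(*eps, w_c=w_c, w_s=w_s, alpha=0.5)
print(eps_hat.shape)  # torch.Size([1, 3, 64, 64])
```

Under this reading, alpha trades off realism against controllability: the joint term models dependence between content and style, while the independent terms let each condition be weighted separately at each timestep.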