Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming, and there exists a noticeable gap in sample quality between EBMs and other generative frameworks such as GANs and diffusion models. To close this gap, inspired by recent efforts to learn EBMs by maximizing diffusion recovery likelihood (DRL), we propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs defined on increasingly noisy versions of a dataset, paired with an initializer model for each EBM. At each noise level, the two models are jointly estimated within a cooperative training framework: samples from the initializer serve as starting points that are refined by a few MCMC sampling steps from the EBM. The EBM is then optimized by maximizing recovery likelihood, while the initializer model is optimized by learning from the difference between the refined samples and the initial samples. In addition, we introduce several practical design choices for EBM training that further improve sample quality. Combining these advances, our approach significantly boosts generation performance over existing EBM methods on the CIFAR-10 and ImageNet datasets. We also demonstrate the effectiveness of our models on several downstream tasks, including classifier-free guided generation, compositional generation, image inpainting, and out-of-distribution detection.
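The cooperative loop at a single noise level can be illustrated with a deliberately minimal 1-D sketch, assuming a quadratic energy with one learnable parameter and a shift-only initializer; all names, hyperparameters, and model forms below are illustrative stand-ins, not the paper's actual architectures.

```python
import numpy as np

# Toy 1-D sketch of one CDRL-style noise level (illustrative assumptions:
# EBM energy E(x) = 0.5 * (x - mu)^2 with learnable mu; initializer
# proposes x0 = y + theta from the noisy observation y).
rng = np.random.default_rng(0)
data_mean, sigma = 2.0, 0.3          # clean-data center, noise scale (toy values)
mu, theta = 0.0, 0.0                 # EBM and initializer parameters
lr, step, n_mcmc = 0.1, 0.01, 5      # learning rate, Langevin step size, MCMC steps

for _ in range(1000):
    x = rng.normal(data_mean, 0.5, size=256)      # clean samples
    y = x + sigma * rng.normal(size=x.shape)      # noisier versions of the data
    x0 = y + theta                                # initializer proposal (starting point)
    # Refine the proposals with a few Langevin steps targeting the
    # conditional p(x | y) ∝ exp(-E(x) - (y - x)^2 / (2 * sigma^2)).
    xk = x0.copy()
    for _ in range(n_mcmc):
        grad = (xk - mu) + (xk - y) / sigma**2    # ∇x of [E(x) + recovery term]
        xk = xk - step * grad + np.sqrt(2 * step) * rng.normal(size=xk.shape)
    # EBM update: contrastive recovery-likelihood gradient
    # (pull energy down on data, push it up on refined samples).
    mu += lr * (x.mean() - xk.mean())
    # Initializer update: learn from the difference between the refined
    # samples and its own initial proposals.
    theta += lr * (xk - x0).mean()
# After training, mu should sit near the data center.
```

In this toy setting the contrastive EBM gradient has its fixed point where refined samples match the data distribution, so `mu` drifts toward `data_mean`; the initializer absorbs the systematic part of the MCMC refinement, which is what lets a few sampling steps suffice at each level.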