Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences. However, rather than merely generating natural designs, we often aim to optimize downstream reward functions while preserving the naturalness of these design spaces. Existing methods for achieving this goal often require ``differentiable'' proxy models (\textit{e.g.}, classifier guidance or DPS) or computationally expensive fine-tuning of diffusion models (\textit{e.g.}, classifier-free guidance, RL-based fine-tuning). In our work, we propose a new method to address these challenges. Our algorithm is an iterative sampling method that integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future, into the standard inference procedure of pre-trained diffusion models. Notably, our approach avoids fine-tuning generative models and eliminates the need to construct differentiable proxy models. This enables us to (1) directly utilize non-differentiable features/reward feedback, commonly used in many scientific domains, and (2) apply our method to recent discrete diffusion models in a principled way. Finally, we demonstrate the effectiveness of our algorithm across several domains, including image generation, molecule generation, and DNA/RNA sequence generation. The code is available at \href{https://github.com/masa-ue/SVDD}{https://github.com/masa-ue/SVDD}.