Recent advances in deep generative models focus on generating samples that satisfy multiple desired properties. However, prevalent approaches optimize these property functions independently, thus overlooking the trade-offs among them. In addition, property optimization is often improperly integrated into the generative model, resulting in an unnecessary compromise on generation quality (i.e., the quality of generated samples). To address these issues, we formulate a constrained optimization problem that seeks to optimize generation quality while ensuring that generated samples reside on the Pareto front of multiple property objectives. This formulation yields samples that cannot be simultaneously improved on the conflicting property functions while preserving high generation quality. Building upon this formulation, we introduce the PaRetO-gUided Diffusion model (PROUD), in which the gradients in the denoising process are dynamically adjusted to enhance generation quality while the generated samples adhere to Pareto optimality. Experimental evaluations on image generation and protein generation tasks demonstrate that, compared to various baselines, PROUD consistently maintains superior generation quality while approaching Pareto optimality across multiple property functions.
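The notion of Pareto optimality invoked above can be made concrete with a minimal sketch (the function name and test data below are illustrative, not from the paper): treating each property objective as a cost to minimize, a sample is Pareto-optimal if no other sample is at least as good on every objective and strictly better on at least one.

```python
import numpy as np

def pareto_mask(costs: np.ndarray) -> np.ndarray:
    """Return a boolean mask of Pareto-optimal rows.

    `costs` is an (n_samples, n_objectives) array where every objective
    is minimized. A row i is Pareto-optimal if no other row dominates it,
    i.e. no row is <= on all objectives and < on at least one.
    """
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue  # already dominated; skip
        # Row j dominates row i if it is no worse everywhere and
        # strictly better somewhere.
        dominated = (np.all(costs <= costs[i], axis=1)
                     & np.any(costs < costs[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask
```

For two conflicting objectives, e.g. costs `[[1, 2], [2, 1], [3, 3], [1.5, 1.5]]`, the first, second, and fourth points form the Pareto front, while `[3, 3]` is dominated. PROUD's constraint requires generated samples to lie on (or approach) this front rather than minimizing each property independently.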