Offline model-based optimization (MBO) aims to maximize a black-box objective function using only an offline dataset of designs and scores. A prevalent approach trains a conditional generative model on existing designs and their associated scores, then generates new designs conditioned on higher target scores. However, these newly generated designs often underperform due to the lack of high-scoring training data. To address this challenge, we introduce a novel method, Design Editing for Offline Model-based Optimization (DEMO), which consists of two phases. In the first phase, termed pseudo-target distribution generation, we apply gradient ascent on the offline dataset using a trained surrogate model, producing a synthetic dataset in which the predicted scores serve as new labels. A conditional diffusion model is subsequently trained on this synthetic dataset to capture a pseudo-target distribution, which improves the model's accuracy in generating higher-scoring designs. Nevertheless, the pseudo-target distribution is susceptible to noise stemming from inaccuracies in the surrogate model, predisposing the conditional diffusion model to generate suboptimal designs. We therefore propose the second phase, existing design editing, to directly incorporate high-scoring features from the offline dataset into design generation. In this phase, top designs from the offline dataset are edited by introducing noise and are subsequently refined with the conditional diffusion model to produce high-scoring designs. Overall, the generated designs first inherit high-scoring features from the offline data in the second phase and are then refined by the more accurate conditional diffusion model trained in the first phase. Empirical evaluations on 7 offline MBO tasks show that DEMO outperforms various baseline methods.
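The two phases described above can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: the quadratic `surrogate` is a hypothetical stand-in for a learned surrogate model, and the refinement step mimics the conditional diffusion model's reverse process with plain gradient ascent.

```python
import numpy as np

# Hypothetical differentiable surrogate f(x) = -||x - c||^2, standing in for a
# trained surrogate model whose true optimum is at c.
c = np.array([1.0, -2.0])

def surrogate(x):
    return -np.sum((x - c) ** 2, axis=-1)

def surrogate_grad(x):
    return -2.0 * (x - c)

def pseudo_target_dataset(designs, steps=50, lr=0.05):
    """Phase 1 sketch: push offline designs uphill under the surrogate.
    The surrogate's predicted scores become the labels of the synthetic
    dataset used to train the conditional diffusion model."""
    x = designs.copy()
    for _ in range(steps):
        x = x + lr * surrogate_grad(x)
    return x, surrogate(x)

def edit_designs(top_designs, noise_scale=0.5, refine_steps=20, lr=0.1, rng=None):
    """Phase 2 sketch: perturb top offline designs with Gaussian noise, then
    refine them. Here refinement is mimicked by surrogate gradient steps; in
    DEMO it is performed by the conditional diffusion model."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = top_designs + noise_scale * rng.standard_normal(top_designs.shape)
    refined, _ = pseudo_target_dataset(noisy, steps=refine_steps, lr=lr)
    return refined

# Toy offline dataset of 2-D designs.
offline = np.array([[0.0, 0.0], [0.5, -1.0], [2.0, 0.0]])
synthetic, labels = pseudo_target_dataset(offline)  # phase 1
edited = edit_designs(offline)                      # phase 2
```

Under this toy surrogate, both the synthetic pseudo-target designs and the edited designs score higher than the original offline designs, which is the behavior the two phases are meant to produce.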