We introduce an approach to bias deep generative models, such as GANs and diffusion models, toward generating data with either enhanced fidelity or increased diversity. Our approach manipulates the distribution of training and generated data through a novel per-sample metric, named pseudo density, which is based on nearest-neighbor information from real samples. It offers three distinct techniques for adjusting the fidelity and diversity of deep generative models: 1) per-sample perturbation, enabling precise adjustments of individual samples toward either more common or more unique characteristics; 2) importance sampling during model inference, to enhance either fidelity or diversity in the generated data; and 3) fine-tuning with importance sampling, which guides the generative model to learn an adjusted distribution, thus controlling fidelity and diversity. Furthermore, our fine-tuning method can improve the Fréchet Inception Distance (FID) of pre-trained generative models within a minimal number of iterations.
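To make the core idea concrete, here is a minimal sketch of a kNN-based pseudo density and the importance weights derived from it. The exact formula is defined in the paper; this sketch assumes one plausible instantiation in which a sample's density is inversely related to its mean distance to the k nearest real samples, and in which an exponent `alpha` (a hypothetical parameter name) trades off fidelity (`alpha > 0`, upweighting dense regions) against diversity (`alpha < 0`, upweighting sparse regions):

```python
import numpy as np

def pseudo_density(samples, real_features, k=5):
    """kNN-based pseudo density for each sample.

    Assumption: density is the inverse of the mean distance to the
    k nearest real samples in feature space (one possible choice,
    not necessarily the paper's exact formula).
    """
    # Pairwise Euclidean distances: (n_samples, n_real)
    dists = np.linalg.norm(
        samples[:, None, :] - real_features[None, :, :], axis=-1
    )
    # Mean distance to the k nearest real samples
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    return 1.0 / (knn_mean + 1e-8)

def importance_weights(density, alpha=1.0):
    """Normalized sampling weights proportional to density**alpha.

    alpha > 0 biases toward high-density (more typical) samples;
    alpha < 0 biases toward low-density (more unique) samples.
    """
    w = density ** alpha
    return w / w.sum()
```

In use, the weights would resample generated data at inference time, or reweight the training objective during fine-tuning, to realize the fidelity/diversity trade-off described above.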