Pre-trained diffusion models have emerged as powerful generative priors for both unconditional and conditional sample generation, yet their outputs often deviate from the characteristics of user-specific target data. Such mismatches are especially problematic in domain adaptation tasks, where only a few reference examples are available and retraining the diffusion model is infeasible. Existing inference-time guidance methods can adjust sampling trajectories, but they typically optimize surrogate objectives such as classifier likelihoods rather than directly aligning with the target distribution. We propose MMD Guidance, a training-free mechanism that augments the reverse diffusion process with gradients of the Maximum Mean Discrepancy (MMD) between generated samples and a reference dataset. MMD provides reliable distributional estimates from limited data, exhibits low variance in practice, and is efficiently differentiable, which makes it particularly well-suited for the guidance task. Our framework naturally extends to prompt-aware adaptation in conditional generation models via product kernels. Moreover, it can be applied efficiently to latent diffusion models (LDMs), since guidance operates directly in the LDM's latent space. Experiments on synthetic and real-world benchmarks demonstrate that MMD Guidance achieves distributional alignment while preserving sample fidelity.
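To make the mechanism concrete, the sketch below shows one way an MMD gradient could be injected into a single reverse-diffusion update, assuming an RBF kernel and an additive correction term. This is a minimal illustration, not the paper's reference implementation: the names `rbf_mmd2`, `mmd_guided_step`, `guidance_scale`, and the toy denoiser are hypothetical, and the paper's exact kernel choice, update rule, and guidance schedule may differ.

```python
# Minimal sketch of MMD-gradient guidance for one reverse-diffusion step.
# Hypothetical names and schedule; not the authors' reference implementation.
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def rbf_mmd2(x, y, bandwidth=1.0):
    # Biased estimate of MMD^2 between generated samples x and reference samples y.
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

def mmd_guided_step(x_t, denoise_step, reference, guidance_scale=1.0, bandwidth=1.0):
    # One reverse-diffusion update with an additive MMD-gradient correction.
    # `denoise_step` is any callable implementing the unguided update x_t -> x_{t-1}.
    x_t = x_t.detach().requires_grad_(True)
    mmd2 = rbf_mmd2(x_t.flatten(1), reference.flatten(1), bandwidth)
    grad = torch.autograd.grad(mmd2, x_t)[0]  # d MMD^2 / d x_t
    with torch.no_grad():
        # Steer the unguided update toward the reference distribution.
        x_prev = denoise_step(x_t) - guidance_scale * grad
    return x_prev

if __name__ == "__main__":
    # Toy usage: a "denoiser" that merely shrinks its input, plus random reference data.
    torch.manual_seed(0)
    reference = torch.randn(64, 8) + 2.0   # few-shot target samples
    x_t = torch.randn(16, 8)               # current noisy batch
    x_prev = mmd_guided_step(x_t, lambda x: 0.9 * x, reference, guidance_scale=5.0)
    print(x_prev.shape)
```

For an LDM, the same correction would be computed on latent codes rather than pixels, which is what keeps the guidance computationally cheap.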