We study the problem of fusing pre-trained (auxiliary) generative models to enhance the training of a target generative model. We propose using KL-divergence weighted barycenters as an optimal fusion mechanism, in which the barycenter weights are trained to minimize a suitable loss on the target population. While computing the optimal KL-barycenter weights can be challenging, we demonstrate that this computation can be carried out efficiently via diffusion score training when the auxiliary generative models are themselves trained with diffusion score methods. Moreover, we show that our fusion method has a dimension-free sample complexity in total variation distance, provided that the auxiliary models are well fitted to their own tasks and the auxiliary tasks combined capture the target well. The main takeaway is that if the auxiliary models are well trained and jointly contain the features present in the target, our fusion method significantly improves the training of generative models. We provide a concise computational implementation of the fusion algorithm and validate its efficiency in the low-data regime with numerical experiments involving mixture models and image datasets.
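The fusion idea can be illustrated with a minimal sketch (not the paper's implementation): given auxiliary score functions and learned barycenter weights summing to one, the fused model uses a convex combination of the auxiliary scores. The Gaussian scores `gaussian_score` and the helper `fused_score` below are hypothetical, chosen purely for illustration.

```python
import numpy as np

def gaussian_score(mu, sigma):
    """Score (gradient of log density) of N(mu, sigma^2 I)."""
    return lambda x: -(x - mu) / sigma**2

def fused_score(scores, lam):
    """Barycenter-style fusion: convex combination of auxiliary scores.

    In the paper's setting, the weights lam would be trained to
    minimize a loss on the target population; here they are fixed.
    """
    lam = np.asarray(lam, dtype=float)
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    return lambda x: sum(w * s(x) for w, s in zip(lam, scores))

# Two toy auxiliary models with unit variance and different means.
s1 = gaussian_score(np.array([0.0, 0.0]), 1.0)
s2 = gaussian_score(np.array([2.0, 2.0]), 1.0)
s = fused_score([s1, s2], [0.5, 0.5])

x = np.array([1.0, 1.0])
print(s(x))  # midpoint of the means, so the fused score is [0., 0.]
```

In a diffusion-model setting the same combination would be applied to time-dependent score networks inside the reverse sampler, with the weights fit on the (small) target dataset.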