Consistency Training (CT) has recently emerged as a promising alternative to diffusion models, achieving competitive performance in image generation tasks. However, non-distillation consistency training often suffers from high variance and instability, and analyzing and improving its training dynamics is an active area of research. In this work, we propose a novel CT training approach based on the Flow Matching framework. Our main contribution is a trained noise-coupling scheme inspired by the architecture of Variational Autoencoders (VAE). By training a data-dependent noise emission model, implemented as an encoder architecture, our method can indirectly learn the geometry of the noise-to-data mapping, which in classical CT is instead fixed by the choice of the forward process. Empirical results across diverse image datasets show significant generative improvements: our model outperforms baselines, achieves the state-of-the-art (SoTA) non-distillation CT FID on CIFAR-10, and attains an FID on par with SoTA on ImageNet at $64 \times 64$ resolution in 2-step generation. Our code is available at https://github.com/sony/vct.
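To make the core idea concrete, the following is a minimal toy sketch of the coupling construction described above: instead of pairing each data point with i.i.d. Gaussian noise, a data-dependent encoder emits the noise sample, and a Flow Matching-style linear path interpolates between noise and data. All function names here (`encoder`, `interpolate`) and the fixed stand-in parameters are hypothetical illustrations, not the paper's actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, rng):
    """Hypothetical data-dependent noise emission model q(z | x).

    In the method described above this is a trained VAE-style encoder;
    here mu and log_sigma are fixed functions of x purely for illustration.
    """
    mu = 0.1 * x                        # stand-in for a learned mean mu_phi(x)
    log_sigma = np.zeros_like(x)        # stand-in for a learned log-std
    eps = rng.standard_normal(x.shape)  # reparameterization trick
    return mu + np.exp(log_sigma) * eps

def interpolate(x, z, t):
    """Flow Matching-style linear path: noise z at t=0, data x at t=1."""
    return (1.0 - t) * z + t * x

# Toy "data" batch and its coupled noise (not independent of x).
x = rng.standard_normal((4, 2))
z = encoder(x, rng)

# A point along the noise-to-data path, as used in the CT objective.
x_t = interpolate(x, z, 0.3)
```

In classical CT the coupling would simply be `z = rng.standard_normal(x.shape)`, independent of `x`; making `z` depend on `x` through a trained encoder is what lets the model shape the geometry of the noise-to-data mapping.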