We propose an inference-time scaling approach for pretrained flow models. Inference-time scaling has recently gained significant attention for LLMs and diffusion models, where additional computation is used to improve sample quality or to better align outputs with user preferences. For diffusion models, particle sampling enables efficient scaling by exploiting the stochasticity of intermediate denoising steps. In contrast, although flow models have gained popularity as an alternative to diffusion models, offering faster generation and high-quality outputs in state-of-the-art image and video generative models, the efficient inference-time scaling methods used for diffusion models cannot be applied directly because the flow generative process is deterministic. To enable efficient inference-time scaling for flow models, we propose three key ideas: 1) SDE-based generation, which enables particle sampling in flow models; 2) interpolant conversion, which broadens the search space and enhances sample diversity; and 3) Rollover Budget Forcing (RBF), which adaptively allocates computational resources across timesteps to maximize budget utilization. Our experiments show that SDE-based generation, particularly variance-preserving (VP) interpolant-based generation, improves the performance of particle sampling methods for inference-time scaling in flow models. Moreover, we demonstrate that RBF with VP-SDE achieves the best performance, outperforming all previous inference-time scaling approaches.
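To make the first key idea concrete, the sketch below shows how a deterministic flow can be sampled stochastically: given a velocity field v(x, t) and a score estimate s(x, t), the probability-flow ODE dx = v dt can be replaced by an SDE with the same marginals, dx = [v + (σ²/2)·s] dt + σ dW, integrated with Euler–Maruyama. This is a minimal toy illustration, not the paper's implementation; `velocity`, `score`, and the choice σ = 0.5 are stand-ins for a pretrained model and its derived score.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity(x, t):
    # Stand-in for a pretrained flow model's velocity field
    # (a simple linear drift, for illustration only).
    return -x

def score(x, t):
    # Hypothetical score estimate; for a standard Gaussian,
    # the true score is grad log p(x) = -x.
    return -x

def sde_sample(x0, n_steps=100, sigma=0.5):
    """Euler-Maruyama integration of the stochastic counterpart
    dx = [v(x,t) + 0.5*sigma^2 * s(x,t)] dt + sigma dW,
    which shares the marginals of the deterministic flow ODE
    but injects noise at every step, enabling particle sampling."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        drift = velocity(x, t) + 0.5 * sigma**2 * score(x, t)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

samples = sde_sample(rng.standard_normal((4, 2)))
print(samples.shape)
```

Because each step is now stochastic, one can run several particles in parallel and resample the most promising ones at intermediate timesteps, which is exactly what the deterministic ODE formulation precludes.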