In this paper, we argue that iterative computation with diffusion models offers a powerful paradigm not only for generation but also for visual perception tasks. We unify tasks such as depth estimation, optical flow, and amodal segmentation under the framework of image-to-image translation, and show how diffusion models benefit from scaling both training and test-time compute on these perceptual tasks. Through a careful analysis of these scaling properties, we formulate compute-optimal training and inference recipes for scaling diffusion models on visual perception tasks. Our models achieve performance competitive with state-of-the-art methods while using significantly less data and compute. Our code and models are available at https://scaling-diffusion-perception.github.io.