Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising route to enhancing zero-shot generalization in dense prediction tasks. However, existing methods often uncritically adopt the original diffusion formulation, which may not be optimal given the fundamental differences between dense prediction and image generation. In this paper, we provide a systematic analysis of the diffusion formulation for dense prediction, focusing on both quality and efficiency. We find that the original parameterization used for image generation, which learns to predict noise, is harmful for dense prediction, and that the multi-step noising/denoising diffusion process is unnecessary and difficult to optimize. Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy, the detail preserver, which yields more accurate and fine-grained predictions. Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also significantly improves efficiency, running hundreds of times faster than most existing diffusion-based methods.
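The "harmful variance" of noise prediction can be seen in a toy numerical sketch (this is illustrative only, not the paper's code): recovering the clean annotation latent from a predicted noise amplifies the network's prediction error by a factor of sqrt((1 - ᾱ_t)/ᾱ_t), which grows large at late timesteps where ᾱ_t is small. All shapes and the 0.1 error magnitude below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
y0 = rng.normal(size=1000)           # toy stand-in for a clean annotation latent

rmses = {}
for alpha_bar in (0.9, 0.1, 0.01):   # smaller alpha_bar = later timestep
    eps = rng.normal(size=y0.shape)
    # forward diffusion: noisy annotation y_t
    y_t = np.sqrt(alpha_bar) * y0 + np.sqrt(1 - alpha_bar) * eps
    # a hypothetical network that predicts the noise with a small fixed error
    eps_hat = eps + 0.1 * rng.normal(size=eps.shape)
    # invert the forward process to recover the annotation from eps_hat
    y0_hat = (y_t - np.sqrt(1 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)
    rmses[alpha_bar] = float(np.sqrt(np.mean((y0_hat - y0) ** 2)))
    print(f"alpha_bar={alpha_bar:5.2f}  annotation RMSE={rmses[alpha_bar]:.3f}")
```

The same small noise-prediction error translates into a much larger annotation error at late timesteps, whereas directly predicting the annotation (as Lotus does) keeps the error at its original scale regardless of timestep.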