In the absence of parallax cues, a learning-based single image depth estimation (SIDE) model relies heavily on shading and contextual cues in the image. While this simplicity is attractive, it is necessary to train such models on large and varied datasets, which are difficult to capture. It has been shown that using embeddings from pre-trained foundation models, such as CLIP, improves zero-shot transfer in several applications. Taking inspiration from this, in our paper we explore the use of global image priors generated from a pre-trained ViT model to provide more detailed contextual information. We argue that the embedding vector from a ViT model pre-trained on a large dataset captures more relevant information for SIDE than the usual route of generating pseudo image captions followed by CLIP-based text embeddings. Based on this idea, we propose a new SIDE model that uses a diffusion backbone conditioned on ViT embeddings. Our proposed design establishes a new state-of-the-art (SOTA) for SIDE: on the NYUv2 dataset it achieves an Abs Rel error of 0.059 (a 14% improvement) compared to 0.069 for the current SOTA (VPD), and on the KITTI dataset it achieves a Sq Rel error of 0.139 (a 2% improvement) compared to 0.142 for the current SOTA (GEDepth). For zero-shot transfer with a model trained on NYUv2, we report mean relative improvements of (20%, 23%, 81%, 25%) over NeWCRFs on the (Sun-RGBD, iBims1, DIODE, HyperSim) datasets, compared to (16%, 18%, 45%, 9%) for ZoeDepth. The project page is available at https://ecodepth-iitd.github.io
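To make the conditioning idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of how a single global ViT embedding can be projected into a small set of conditioning tokens and injected into a diffusion-style decoder's spatial features via cross-attention. All class names, token counts, and dimensions (`EmbeddingToTokens`, `CrossAttentionBlock`, 768-d ViT embedding, 320-d features) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: condition spatial depth features on a global ViT
# image embedding via cross-attention, in the spirit of conditioning a
# diffusion backbone on ViT embeddings. Names and sizes are placeholders.

class EmbeddingToTokens(nn.Module):
    """Project one global ViT embedding into a small set of conditioning tokens."""
    def __init__(self, vit_dim=768, ctx_dim=768, num_tokens=8):
        super().__init__()
        self.num_tokens = num_tokens
        self.ctx_dim = ctx_dim
        self.proj = nn.Linear(vit_dim, num_tokens * ctx_dim)

    def forward(self, vit_embedding):            # (B, vit_dim)
        tokens = self.proj(vit_embedding)         # (B, num_tokens * ctx_dim)
        return tokens.view(-1, self.num_tokens, self.ctx_dim)

class CrossAttentionBlock(nn.Module):
    """Inject conditioning tokens into spatial features, as a diffusion UNet's
    cross-attention layers do with text embeddings."""
    def __init__(self, feat_dim=320, ctx_dim=768, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, heads,
                                          kdim=ctx_dim, vdim=ctx_dim,
                                          batch_first=True)

    def forward(self, feats, ctx):                # feats: (B, HW, feat_dim), ctx: (B, T, ctx_dim)
        out, _ = self.attn(self.norm(feats), ctx, ctx)
        return feats + out                        # residual update of the spatial features

# Toy usage: a (B, 768) global image embedding conditions a (B, 64*64, 320) feature map.
emb2tok = EmbeddingToTokens()
xattn = CrossAttentionBlock()
vit_embedding = torch.randn(2, 768)               # stand-in for a pre-trained ViT's global embedding
feats = torch.randn(2, 64 * 64, 320)              # stand-in for intermediate decoder features
feats = xattn(feats, emb2tok(vit_embedding))
print(feats.shape)                                # torch.Size([2, 4096, 320])
```

The design point the sketch conveys is that the conditioning signal is a dense image embedding rather than text: the same cross-attention pathway a text-conditioned diffusion model uses can consume ViT-derived tokens directly, avoiding the pseudo-caption plus CLIP text-embedding route.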