Despite the recent success of discriminative approaches to monocular depth estimation, their quality remains limited by the training datasets. Generative approaches mitigate this issue by leveraging strong priors derived from training on internet-scale datasets. Recent studies have demonstrated that large text-to-image diffusion models achieve state-of-the-art results in depth estimation when fine-tuned on small depth datasets. Concurrently, autoregressive generative approaches, such as Visual AutoRegressive modeling~(VAR), have shown promising results in conditional image synthesis. Following the visual autoregressive modeling paradigm, we introduce the first autoregressive depth estimation model based on the visual autoregressive transformer. Our primary contribution is DepthART -- a novel training method formulated as a Depth Autoregressive Refinement Task. Unlike the original VAR training procedure, which employs static targets, our method utilizes a dynamic target formulation that enables model self-refinement and incorporates multi-modal guidance during training. Specifically, we use model predictions as inputs instead of ground-truth token maps during training, framing the objective as residual minimization. Our experiments demonstrate that the proposed training approach significantly outperforms visual autoregressive modeling via next-scale prediction on the depth estimation task. The visual autoregressive transformer trained with our approach on Hypersim achieves superior results on a set of unseen benchmarks compared to other generative and discriminative baselines.
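The dynamic-target idea can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical illustration (not the authors' implementation): at each scale the training target is the quantized residual between the ground-truth latent and the accumulation of the model's *own* coarser-scale predictions, rather than a static ground-truth token map. All function and variable names are invented for illustration, and cross-scale interpolation is omitted for brevity (all maps share one resolution here).

```python
import numpy as np

def quantize(residual, codebook):
    """Nearest-codebook-entry quantization of each D-dim vector.

    residual: (N, D) array, codebook: (K, D) array.
    Returns the chosen indices and the corresponding codebook vectors.
    """
    dists = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

def depthart_targets(gt_latent, predicted_maps, codebook):
    """Dynamic target for one scale (illustrative only).

    Instead of the static ground-truth token maps used by vanilla VAR,
    the target is the quantized residual between the ground-truth
    latent and the sum of the model's own earlier-scale predictions,
    so the model learns to refine its own mistakes.
    """
    accumulated = sum(predicted_maps) if predicted_maps else np.zeros_like(gt_latent)
    residual = gt_latent - accumulated          # what is still missing
    target_idx, quantized = quantize(residual, codebook)
    return target_idx, accumulated + quantized  # targets + updated state
```

At inference the same accumulation runs over the model's sampled token maps, so training and inference inputs match by construction, which is the motivation for replacing static targets.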