Recovering metric depth from a single image remains a fundamental challenge in computer vision, requiring both scene understanding and accurate scale estimation. While deep learning has advanced monocular depth estimation, current models often struggle with unfamiliar scenes and layouts, particularly in zero-shot settings and when predicting metric depth across varying scales. We present MetricGold, a novel approach that harnesses the rich priors of generative diffusion models to improve metric depth estimation. Building on recent advances in Marigold, DDVM, and Depth Anything V2, our method combines latent diffusion, a log-scaled metric depth representation, and training on synthetic data. MetricGold trains efficiently on a single RTX 3090 within two days using photo-realistic synthetic data from Hypersim, Virtual KITTI, and TartanAir. Our experiments demonstrate robust generalization across diverse datasets, producing sharper and higher-quality metric depth estimates than existing approaches.
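The log-scaled metric depth representation mentioned above can be illustrated with a minimal sketch: metric depth is mapped into log space and normalized to a fixed range before being fed to the diffusion model, then inverted at decode time. The function names and the `d_min`/`d_max` bounds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def encode_log_depth(depth_m, d_min=0.1, d_max=80.0):
    """Map metric depth in meters to [-1, 1] via a log scale.

    Hypothetical sketch; d_min/d_max are assumed scene bounds,
    not values taken from MetricGold.
    """
    d = np.clip(depth_m, d_min, d_max)
    lo, hi = np.log(d_min), np.log(d_max)
    return 2.0 * (np.log(d) - lo) / (hi - lo) - 1.0

def decode_log_depth(x, d_min=0.1, d_max=80.0):
    """Invert encode_log_depth, recovering depth in meters."""
    lo, hi = np.log(d_min), np.log(d_max)
    return np.exp((x + 1.0) / 2.0 * (hi - lo) + lo)
```

A log-scale encoding of this kind allocates more of the normalized range to nearby depths, where metric accuracy matters most, while still representing distant geometry within the same bounded interval.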