Video depth estimation has long been hindered by the scarcity of consistent, scalable ground truth data, leading to unreliable and temporally inconsistent results. In this paper, we introduce Depth Any Video, a model that tackles this challenge through two key innovations. First, we develop a scalable synthetic data pipeline that captures real-time video depth data from diverse synthetic environments, yielding 40,000 five-second video clips, each with precise depth annotations. Second, we leverage the powerful priors of generative video diffusion models to handle real-world videos effectively, integrating advanced techniques such as rotary position encoding and flow matching to further enhance flexibility and efficiency. Unlike previous models, which are limited to fixed-length video sequences, our approach introduces a novel mixed-duration training strategy that handles videos of varying lengths and performs robustly across different frame rates, even on single frames. At inference, we propose a depth interpolation method that enables the model to infer high-resolution video depth for sequences of up to 150 frames. Our model outperforms all previous generative depth models in spatial accuracy and temporal consistency.
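To make the flow matching component concrete, below is a minimal sketch of a conditional flow matching (rectified flow) training objective for a latent video depth denoiser. This is an illustrative assumption, not the paper's actual implementation: the `model` signature, the latent tensor shapes, and the conditioning argument `cond` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, depth_latent, video_latent):
    """Sketch of a conditional flow matching objective.

    depth_latent: (B, T, C, H, W) clean depth latents (data endpoint x1)
    video_latent: (B, T, C, H, W) conditioning video latents
    The denoiser is trained to predict the constant velocity v = x1 - x0
    along a straight path between noise and data.
    """
    b = depth_latent.shape[0]
    x1 = depth_latent                           # data endpoint
    x0 = torch.randn_like(x1)                   # noise endpoint
    t = torch.rand(b, device=x1.device)         # flow time sampled uniformly in [0, 1]
    t_ = t.view(b, 1, 1, 1, 1)
    xt = (1.0 - t_) * x0 + t_ * x1              # linear interpolation between noise and data
    target_v = x1 - x0                          # ground-truth velocity along the path
    pred_v = model(xt, t, cond=video_latent)    # hypothetical denoiser call
    return F.mse_loss(pred_v, target_v)
```

At inference, such a model would integrate the predicted velocity field from noise toward data in a small number of steps, which is the usual source of the efficiency gain attributed to flow matching over standard diffusion sampling.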