This paper proposes a concise, elegant, and robust pipeline to estimate smooth camera trajectories and obtain dense point clouds for casual videos in the wild. Traditional frameworks, such as ParticleSfM~\cite{zhao2022particlesfm}, address this problem by sequentially computing the optical flow between adjacent frames to obtain point trajectories. They then remove dynamic trajectories through motion segmentation and perform global bundle adjustment. However, estimating optical flow between adjacent frames and chaining the matches can introduce cumulative errors. Additionally, motion segmentation combined with single-view depth estimation often suffers from scale ambiguity. To tackle these challenges, we propose a dynamic-aware tracking-any-point (DATAP) method that leverages consistent video depth and point tracking. Specifically, DATAP estimates dense point tracks across the video sequence and predicts the visibility and dynamics of each point. Incorporating the consistent video depth prior further improves motion segmentation. With DATAP integrated, all camera poses can be estimated and optimized simultaneously by performing global bundle adjustment on the point tracks classified as static and visible, rather than relying on incremental camera registration. Extensive experiments on dynamic sequences, e.g., Sintel and TUM RGBD dynamic sequences, and on in-the-wild videos, e.g., DAVIS, demonstrate that the proposed method achieves state-of-the-art camera pose estimation even in complex, challenging dynamic scenes.
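The track-selection step described above (keeping only static, visible point tracks for global bundle adjustment) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_ba_tracks`, the thresholds, and the array layout are all assumptions for exposition.

```python
import numpy as np

def select_ba_tracks(visibility, dynamic_prob, dyn_thresh=0.5, min_visible=2):
    """Hypothetical selection of point tracks for global bundle adjustment.

    visibility:   (N, T) per-frame visibility probabilities in [0, 1]
                  for N tracks over T frames.
    dynamic_prob: (N,) per-track probability of belonging to a dynamic object.

    A track is kept if it is classified as static and is visible
    in at least `min_visible` frames (thresholds are illustrative).
    """
    visible_count = (visibility > 0.5).sum(axis=1)  # frames where the point is seen
    is_static = dynamic_prob < dyn_thresh           # keep static-scene tracks only
    return is_static & (visible_count >= min_visible)

# Toy example: 3 tracks over 4 frames.
visibility = np.array([[0.9, 0.9, 0.9, 0.9],   # always visible, static
                       [0.9, 0.1, 0.1, 0.1],   # mostly occluded
                       [0.9, 0.9, 0.9, 0.9]])  # visible but dynamic
dynamic_prob = np.array([0.1, 0.1, 0.9])
mask = select_ba_tracks(visibility, dynamic_prob)
print(mask.tolist())  # only the first track qualifies: [True, False, False]
```

Only the surviving tracks would then enter the global bundle adjustment, which jointly optimizes all camera poses instead of registering cameras incrementally.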