Spatial visual perception is a fundamental requirement in physical-world applications such as autonomous driving and robotic manipulation, driven by the need to interact with 3D environments. Capturing pixel-aligned metric depth with RGB-D cameras is the most direct way to meet this need, yet it is often hampered by hardware limitations and challenging imaging conditions, especially on specular or texture-less surfaces. In this work, we argue that the inaccuracies from depth sensors can be viewed as "masked" signals that inherently reflect underlying geometric ambiguities. Building on this motivation, we present LingBot-Depth, a depth completion model that leverages visual context to refine depth maps through masked depth modeling and incorporates an automated data curation pipeline for scalable training. Encouragingly, our model outperforms top-tier RGB-D cameras in both depth precision and pixel coverage. Experimental results on a range of downstream tasks further suggest that LingBot-Depth offers an aligned latent representation across the RGB and depth modalities. We release the code, checkpoint, and 3M RGB-depth pairs (2M real and 1M simulated) to the spatial perception community.