Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face practical challenges including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth estimation offers a more compact alternative, but is constrained by the unobservability of metric scale. Light field imaging provides a promising route to metric depth through a unique lens configuration within a single device. However, its application to single-view dense metric depth remains underexplored, mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometrical models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth estimation. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot: it first generates a sparse metric point cloud using machine learning, then uses that point cloud to scale and align a dense relative depth map regressed by a foundation depth model, yielding dense metric depth. To validate the pipeline, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying a solid groundwork for future research in this field.
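The scale-and-align step described above can be sketched as a least-squares affine fit: find a scale and shift that map the foundation model's relative depth onto the sparse metric samples, then apply them to the whole map. This is a minimal illustration under assumed conventions (function names and the closed-form fit are hypothetical, not the paper's implementation):

```python
import numpy as np

def align_relative_depth(rel_depth, sparse_metric, mask):
    """Fit scale s and shift t so that s * rel_depth + t matches the
    sparse metric depth at the valid pixels (least squares), then
    apply the affine transform to the full relative depth map."""
    d_rel = rel_depth[mask]          # relative depth at sparse points
    d_met = sparse_metric[mask]      # metric depth at the same points
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, d_met, rcond=None)
    return s * rel_depth + t

# Toy example: the relative map is an unknown affine distortion of the
# true metric map, observed only at a sparse set of pixels.
true_metric = np.linspace(0.5, 5.0, 100).reshape(10, 10)
relative = (true_metric - 0.2) / 2.0           # unknown scale/shift
mask = np.zeros_like(true_metric, dtype=bool)
mask.ravel()[::7] = True                       # sparse "point cloud"
dense_metric = align_relative_depth(relative, true_metric, mask)
```

In practice the sparse points come from the plenoptic camera's metric point cloud rather than ground truth, and a robust estimator (e.g. RANSAC over the affine fit) would typically replace plain least squares to tolerate outliers.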