RGB-D cameras are central to robotic perception, as they produce images augmented with per-pixel depth. However, their limited field of view (FOV) often requires multiple cameras to cover a broader area. In multi-camera RGB-D setups, the goal is typically to minimize inter-camera overlap, maximizing spatial coverage with as few cameras as possible. Extrinsically calibrating such systems introduces additional complexity: existing methods either require dedicated calibration targets or depend heavily on the accuracy of camera motion estimation. To address these issues, we present PeLiCal, a novel line-based calibration approach for RGB-D camera systems with limited overlap. Our method leverages long line features from the surroundings and filters out outliers with a novel convergence voting algorithm, achieving targetless, real-time, and outlier-robust performance compared to existing methods. We open-source our implementation at \url{https://github.com/joomeok/PeLiCal.git}.