A colored point cloud, as a simple and efficient 3D representation, offers advantages in many fields, including robotic navigation and scene reconstruction. This representation is now commonly used in 3D reconstruction tasks relying on cameras and LiDARs. However, many existing frameworks fuse data from these two sensor types poorly, leading to unsatisfactory mapping results, mainly due to inaccurate camera poses. This paper presents OmniColor, a novel and efficient algorithm for colorizing point clouds using an independent 360-degree camera. Given a LiDAR-based point cloud and a sequence of panoramic images with coarse initial camera poses, our objective is to jointly optimize the poses of all frames so that the images map accurately onto the geometric reconstruction. Our pipeline works in an off-the-shelf manner, requiring no feature extraction or matching; instead, we find optimal poses by directly maximizing the photometric consistency of the LiDAR map. Experiments show that our method overcomes the severe visual distortion of omnidirectional images and benefits greatly from the wide field of view (FOV) of 360-degree cameras, reconstructing diverse scenarios accurately and stably. The code will be released at https://github.com/liubonan123/OmniColor/.
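The core idea described above, scoring candidate camera poses by the photometric consistency of map points projected into the panoramas, can be sketched as follows. This is a minimal illustration under assumed conventions (an equirectangular projection model and an intensity-variance cost), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def project_equirect(points_world, R, t, width, height):
    """Project 3D world points into an equirectangular (360-degree) image.
    R, t: world-to-camera rotation (3x3) and translation (3,).
    Returns (N, 2) pixel coordinates (u, v)."""
    p = (R @ points_world.T).T + t                 # points in camera frame
    lon = np.arctan2(p[:, 0], p[:, 2])             # azimuth in [-pi, pi]
    lat = np.arcsin(p[:, 1] / np.linalg.norm(p, axis=1))  # elevation
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return np.stack([u, v], axis=1)

def photometric_cost(points, poses, images):
    """Sum over map points of the intensity variance across all frames.
    Lower cost = more photometrically consistent poses; a pose optimizer
    would minimize this over the poses of all frames jointly."""
    h, w = images[0].shape
    samples = []
    for (R, t), img in zip(poses, images):
        uv = project_equirect(points, R, t, w, h)
        ui = np.clip(uv[:, 0].astype(int), 0, w - 1)   # nearest-neighbor lookup
        vi = np.clip(uv[:, 1].astype(int), 0, h - 1)
        samples.append(img[vi, ui])
    return np.var(np.stack(samples), axis=0).sum()
```

For example, two frames with identical poses and identical images sample the same intensities at every point, so the cost is zero; perturbing one pose misaligns the samples and raises the cost, which is the signal the joint optimization exploits.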