Cross-modal data registration has long been a critical task in computer vision, with extensive applications in autonomous driving and robotics. Accurate and robust registration methods are essential for aligning data from different modalities, forming the foundation for multimodal sensor fusion and enhancing the accuracy and reliability of perception systems. The registration task between 2D images captured by cameras and 3D point clouds captured by Light Detection and Ranging (LiDAR) sensors is usually treated as a visual pose estimation problem: high-dimensional feature similarities across modalities are leveraged to identify pixel-point correspondences, after which the pose is estimated via least-squares methods. However, due to computational constraints, existing approaches often downsample the original point cloud and image data, inevitably leading to a loss in precision. Additionally, high-dimensional features extracted by modality-specific feature extractors require dedicated techniques to bridge cross-modal differences before effective matching is possible. To address these challenges, we propose a method that uses edge information from the original point clouds and images for cross-modal registration. By extracting edge points and edge pixels, we retain crucial information from the original data, enhancing registration accuracy while maintaining computational efficiency. The use of edge points and edge pixels allows us to introduce an attention-based feature exchange block to eliminate cross-modal disparities. Furthermore, we incorporate an optimal matching layer to improve correspondence identification. We validate the accuracy of our method on the KITTI and nuScenes datasets, demonstrating its state-of-the-art performance.