This paper proposes a real-time multi-plane segmentation method based on GPU-accelerated high-resolution 3D voxel mapping for legged robot locomotion. Existing online planar mapping approaches struggle to balance accuracy and computational efficiency: segmentation applied directly to depth images from specific sensors lacks temporal integration across frames, height-map-based methods cannot represent complex 3D structures such as overhangs, and voxel-based plane segmentation remains unexplored for real-time applications. To address these limitations, we develop a novel framework that integrates vertex-based connected component labeling with random sample consensus (RANSAC)-based plane detection and convex hull extraction, leveraging GPU parallel computing to rapidly extract planar regions from point clouds accumulated in high-resolution 3D voxel maps. Experimental results demonstrate that the proposed method achieves fast and accurate 3D multi-plane segmentation at an update rate of over 30 Hz even at a resolution of 0.01 m, enabling the detected planes to be used in real time for locomotion tasks. Furthermore, we validate the effectiveness of our approach through experiments in both simulated environments and on a physical legged robot platform, confirming robust locomotion performance when 3D planar structures are taken into account.
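For concreteness, the sketch below illustrates, on the CPU and in plain Python, the two per-component steps named above: RANSAC plane fitting followed by a convex-hull boundary of the resulting inliers. This is not the paper's GPU implementation; the function names, the thresholds, and the use of NumPy/SciPy are illustrative assumptions, and in the actual pipeline these steps would run in parallel on points extracted from the voxel map.

```python
import numpy as np
from scipy.spatial import ConvexHull


def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Fit a single plane to a point set with RANSAC (illustrative sketch).

    points: (N, 3) array of 3D points from one connected component.
    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = None, None
    for _ in range(iters):
        # Sample three distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (near-collinear) sample, skip
            continue
        n /= norm
        d = -np.dot(n, p0)
        # Count points within the distance threshold of the candidate plane.
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers


def plane_polygon(inlier_points, normal):
    """Project inliers onto the fitted plane and return their convex-hull boundary."""
    # Build an orthonormal basis (u, v) spanning the plane.
    a = np.array([0.0, 1.0, 0.0]) if abs(normal[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(normal, a)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    # 2D convex hull of the projected coordinates gives the planar region's outline.
    uv = np.stack([inlier_points @ u, inlier_points @ v], axis=1)
    hull = ConvexHull(uv)
    return inlier_points[hull.vertices]  # boundary vertices, in 3D
```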