3D meshes are a fundamental representation widely used in computer science and engineering. In robotics, they are particularly valuable because they capture objects in a form that aligns directly with how robots interact with the physical world, enabling core capabilities such as predicting stable grasps, detecting collisions, and simulating dynamics. Although automatic 3D mesh generation has made promising progress in recent years, potentially offering a path toward real-time robot perception, two critical challenges remain. First, generating high-fidelity meshes is prohibitively slow for real-time use, often requiring tens of seconds per object. Second, mesh generation by itself is insufficient: in robotics, a mesh must be contextually grounded, i.e., correctly segmented from the scene and registered with the proper scale and pose. Moreover, unless these grounding steps are themselves efficient, they simply introduce new bottlenecks. In this work, we introduce an end-to-end system that addresses these challenges, producing a high-quality, contextually grounded 3D mesh from a single RGB-D image in under one second. Our pipeline integrates open-vocabulary object segmentation, accelerated diffusion-based mesh generation, and robust point cloud registration, each optimized for both speed and accuracy. We demonstrate the system's effectiveness in a real-world manipulation task, showing that it makes meshes a practical, on-demand representation for robot perception and planning.
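The abstract describes a three-stage pipeline: segment the target object, generate its mesh, and register the mesh into the scene at the correct scale and pose. The sketch below is a minimal illustration of how those stages compose over a single RGB-D frame. It is not the paper's implementation: every function body is a trivial geometric stand-in (the real system uses learned open-vocabulary segmentation, an accelerated diffusion mesh generator, and robust registration), and all names, signatures, and the pinhole-intrinsics convention are illustrative assumptions.

```python
# Minimal sketch of the segment -> generate -> register pipeline over one
# RGB-D frame. All function bodies are trivial stand-ins, not the paper's
# actual components; names and signatures are assumptions for illustration.
import numpy as np
from scipy.spatial import ConvexHull


def segment_object(rgb, depth, prompt):
    """Stand-in for open-vocabulary segmentation: returns a boolean mask.
    Placeholder logic: treat every valid-depth pixel as the object."""
    return depth > 0


def backproject(depth, mask, fx, fy, cx, cy):
    """Lift masked depth pixels into a 3D point cloud in the camera frame
    using a standard pinhole model (fx, fy, cx, cy assumed known)."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)


def generate_mesh(rgb, mask):
    """Stand-in for diffusion-based mesh generation: returns canonical
    (vertices, faces). Placeholder logic: emit a unit cube."""
    verts = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], float)
    faces = ConvexHull(verts).simplices
    return verts - 0.5, faces  # center the canonical mesh at the origin


def register(verts, cloud):
    """Stand-in for point cloud registration: align scale and translation
    by matching bounding-box extents and centroids. A real system would
    estimate a full rigid pose robustly against noise and occlusion."""
    scale = (cloud.max(0) - cloud.min(0)).max() / (verts.max(0) - verts.min(0)).max()
    t = cloud.mean(0) - scale * verts.mean(0)
    return scale * verts + t, scale, t


def grounded_mesh_from_rgbd(rgb, depth, prompt, intrinsics):
    """Compose the three stages into one call: RGB-D in, grounded mesh out."""
    mask = segment_object(rgb, depth, prompt)      # 1. segment the object
    verts, faces = generate_mesh(rgb, mask)        # 2. generate a canonical mesh
    cloud = backproject(depth, mask, *intrinsics)  # 3. lift masked depth to 3D
    verts, scale, t = register(verts, cloud)       # 4. register scale and pose
    return verts, faces, scale, t
```

As a usage example under the same assumptions: on a synthetic 480x640 frame with constant 0.6 m depth, `grounded_mesh_from_rgbd(rgb, depth, "mug", (600, 600, 320, 240))` returns the placeholder cube rescaled and translated onto the observed points. The structural point the sketch makes is that grounding (steps 1, 3, and 4) wraps generation (step 2), which is why the abstract argues the grounding steps must be as fast as the generator itself.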