Implicit neural representations and 3D Gaussian splatting (3DGS) have shown great potential for scene reconstruction. Recent studies have extended their application to autonomous reconstruction through task-assignment methods. However, these methods are largely limited to a single robot, and rapid reconstruction of large-scale scenes remains challenging. Moreover, task-driven planning based on surface uncertainty is prone to becoming trapped in local optima. To this end, we propose the first 3DGS-based centralized multi-robot autonomous 3D reconstruction framework. To further reduce the time cost of task generation and improve reconstruction quality, we integrate online open-vocabulary semantic segmentation with the surface uncertainty of 3DGS, focusing view sampling on regions of high instance uncertainty. Finally, we develop a multi-robot collaboration strategy with mode and task assignments that improves reconstruction quality while ensuring planning efficiency. Our method achieves the highest reconstruction quality among all planning methods and superior planning efficiency compared with existing multi-robot methods. We deploy our method on multiple robots, and the results show that it effectively plans view paths and reconstructs scenes with high quality.