With the advancement of computer vision, the recently emerged 3D Gaussian Splatting (3DGS) has become a popular scene reconstruction algorithm due to its outstanding performance. Distributed 3DGS can efficiently utilize edge devices to train directly on collected images, thereby offloading computational demands and improving efficiency. However, traditional distributed frameworks often overlook the computational and communication challenges of real-world environments, hindering large-scale deployment and potentially posing privacy risks. In this paper, we propose Radiant, a hierarchical 3DGS algorithm for large-scale scene reconstruction that accounts for system heterogeneity, improving both model performance and training efficiency. Through an extensive empirical study, we find it crucial to partition regions appropriately across edge devices and to allocate distinct camera positions to each device for image collection and training. The core of Radiant is partitioning regions based on heterogeneous environment information and allocating workloads to each device accordingly. Furthermore, we provide a 3DGS model aggregation algorithm that enhances quality and ensures continuity at model boundaries. Finally, we develop a testbed, and experiments demonstrate that Radiant improves reconstruction quality by up to 25.7\% and reduces end-to-end latency by up to 79.6\%.