We present 3DGS-CD, the first 3D Gaussian Splatting (3DGS)-based method for detecting physical object rearrangements in 3D scenes. Our approach estimates 3D object-level changes by comparing two sets of unaligned images taken at different times. Leveraging 3DGS's novel view rendering and EfficientSAM's zero-shot segmentation capabilities, we detect 2D object-level changes, which are then associated and fused across views to estimate 3D changes. Our method can detect changes in cluttered environments in as little as 18 seconds, using sparse post-change images -- as few as a single new image. It does not rely on depth input, user instructions, object classes, or object models -- an object is recognized simply if it has been rearranged. Our approach is evaluated on both public and self-collected real-world datasets, achieving up to 14% higher accuracy and three orders of magnitude faster performance than the state-of-the-art radiance-field-based change detection method. This significant performance boost enables a broad range of downstream applications, of which we highlight three key use cases: object reconstruction, robot workspace reset, and 3DGS model update. Our code and data will be made available at https://github.com/520xyxyzq/3DGS-CD.