We tackle the challenge of efficiently reconstructing 3D scenes with high detail on objects of interest. Existing 3D Gaussian Splatting (3DGS) methods allocate resources uniformly across the scene, which limits the fine detail achievable on Regions Of Interest (ROIs) and inflates model size. We propose ROI-GS, an object-aware framework that enhances local detail through object-guided camera selection, targeted object training, and seamless integration of high-fidelity object-of-interest reconstructions into the global scene. Our method prioritizes higher-resolution detail on chosen objects while maintaining real-time performance. Experiments show that ROI-GS significantly improves local quality (by up to 2.96 dB PSNR) while reducing the overall model size by $\approx 17\%$ relative to the baseline and training faster for a scene with a single object of interest, outperforming existing methods.
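To make the object-guided camera selection step concrete, the sketch below shows one plausible realization: keep only the training views whose frustum covers a bounding sphere around the object of interest. The function name \texttt{select\_roi\_cameras}, the camera dictionary layout, and the bounding-sphere visibility test are illustrative assumptions, not the exact implementation used in ROI-GS.

\begin{verbatim}
import numpy as np

def select_roi_cameras(cameras, roi_center, roi_radius):
    """Return indices of cameras whose view plausibly covers the ROI.

    Assumed (hypothetical) camera format: each camera is a dict with
      'R'      : (3, 3) world-to-camera rotation,
      't'      : (3,)   translation,
      'K'      : (3, 3) pinhole intrinsics,
      'width', 'height' : image size in pixels.
    roi_center : (3,) world-space center of the object of interest.
    roi_radius : radius of a bounding sphere enclosing the object.
    """
    keep = []
    for idx, cam in enumerate(cameras):
        # Transform the ROI center into the camera frame.
        p = cam["R"] @ roi_center + cam["t"]
        if p[2] <= roi_radius:
            # ROI is behind, or intersects, the camera: skip this view.
            continue
        # Pinhole projection of the ROI center onto the image plane.
        u = cam["K"][0, 0] * p[0] / p[2] + cam["K"][0, 2]
        v = cam["K"][1, 1] * p[1] / p[2] + cam["K"][1, 2]
        # Approximate projected radius of the bounding sphere in pixels.
        r_px = cam["K"][0, 0] * roi_radius / p[2]
        # Keep the view if the projected sphere overlaps the image frame.
        if (-r_px <= u <= cam["width"] + r_px and
                -r_px <= v <= cam["height"] + r_px):
            keep.append(idx)
    return keep
\end{verbatim}

The selected subset could then be used for the targeted object-training stage, while the full camera set trains the global scene; this split is what allows extra detail to be spent only where it matters.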