3D Gaussian Splatting (3DGS) has recently gained popularity for efficient scene rendering by representing scenes as explicit sets of anisotropic 3D Gaussians. However, most existing work focuses primarily on modeling external surfaces. In this work, we target the reconstruction of internal scenes, which is crucial for applications that require a deep understanding of an object's interior. By directly modeling a continuous volumetric density with an inner 3D Gaussian distribution, our model effectively reconstructs smooth and detailed internal structures from sparse sliced data. Beyond high-fidelity reconstruction, we further demonstrate the framework's potential for downstream tasks such as segmentation. By integrating language features, we extend our approach to enable text-guided segmentation of medical scenes via natural language queries. Our approach eliminates the need for camera poses, is plug-and-play, and is inherently compatible with any data modality. We provide a CUDA implementation at: https://github.com/Shuxin-Liang/InnerGS.
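The core idea of representing a continuous volumetric density as a sum of anisotropic 3D Gaussians can be sketched as follows. This is a minimal NumPy illustration, not the paper's CUDA implementation; all function and parameter names (`gaussian_density`, `means`, `rotations`, `scales`, `opacities`) are assumptions, and each covariance is factored as Σ = R S Sᵀ Rᵀ as is standard in 3DGS.

```python
import numpy as np

def gaussian_density(x, means, rotations, scales, opacities):
    """Evaluate a continuous volumetric density at query points x
    as an opacity-weighted sum of anisotropic 3D Gaussians.
    (Illustrative sketch; names are hypothetical, not the paper's API.)

    x: (N, 3) query points; means: (K, 3); rotations: (K, 3, 3);
    scales: (K, 3) per-axis standard deviations; opacities: (K,).
    """
    density = np.zeros(len(x))
    for mu, R, s, alpha in zip(means, rotations, scales, opacities):
        # Anisotropic covariance: Sigma = R diag(s^2) R^T
        cov = R @ np.diag(s ** 2) @ R.T
        d = x - mu
        # Squared Mahalanobis distance, via solve() rather than an explicit inverse
        m = np.einsum('ij,ij->i', d, np.linalg.solve(cov, d.T).T)
        density += alpha * np.exp(-0.5 * m)
    return density
```

Because the density is defined at every point in space (not only on a surface), it can be sampled between sparse slices, which is what makes smooth interior reconstruction from sliced data possible.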