With the emergence of large-scale Text-to-Image (T2I) models and implicit 3D representations such as Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometry and texture makes it difficult to locate and control objects accurately during editing. Recently, significant progress has been made in editing methods for 3D Gaussian Splatting, a real-time rendering technique built on an explicit representation; nevertheless, these methods still suffer from inaccurate localization and limited control over the edits. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform various creative and precise edits using only text prompts. Leveraging the explicit nature of 3D Gaussians, we introduce an attention-based progressive localization module that attaches semantic labels to each Gaussian during rendering. This enables precise localization of editing regions by classifying Gaussians according to their relevance to the editing prompt, derived from the cross-attention layers of the T2I model. Furthermore, we present a novel editing optimization method based on 3D Gaussian Splatting that obtains stable and refined editing results through the guidance of Score Distillation Sampling and pseudo ground truth. Extensive experiments demonstrate the efficacy of our method.
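The localization idea described above — scoring each Gaussian by its cross-attention relevance to the editing prompt and thresholding the scores into editable versus frozen sets — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the min-max normalization, and the fixed threshold are all assumptions, and real per-Gaussian scores would be accumulated by back-projecting 2D attention maps over many rendered views.

```python
import numpy as np

def label_gaussians(attn_scores, threshold=0.5):
    """Assign a binary semantic label to each Gaussian.

    attn_scores: array of shape (N,), one relevance score per Gaussian,
                 assumed to be accumulated from T2I cross-attention maps.
    Returns an (N,) int array: 1 = inside the editing region, 0 = frozen.
    """
    s = np.asarray(attn_scores, dtype=float)
    # Normalize to [0, 1] so a single threshold is comparable across scenes;
    # the small epsilon guards against a constant score vector.
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)
    return (s >= threshold).astype(int)
```

During optimization, only the Gaussians labeled 1 would receive gradients from the SDS guidance, which is what confines the edit to the localized region.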