3D editing plays a crucial role in modifying and reusing existing 3D assets, thereby enhancing productivity. Recently, 3DGS-based methods have gained increasing attention due to their efficient rendering and flexibility. However, achieving a desired 3D editing result often requires multiple adjustments in an iterative loop, incurring tens of minutes of training per attempt and imposing a cumbersome trial-and-error cycle on users. This in-the-loop training paradigm leads to a poor user experience. To address this issue, we introduce the concept of process-oriented modelling for 3D editing and propose the Progressive Gaussian Differential Field (ProGDF), an out-of-loop training approach that requires only a single training session to provide users with controllable editing capability and variable editing results through a user-friendly interface in real time. ProGDF consists of two key components: Progressive Gaussian Splatting (PGS) and the Gaussian Differential Field (GDF). PGS introduces a progressive constraint to extract diverse intermediate results of the editing process and employs rendering-quality regularization to improve the quality of these results. Based on these intermediate results, GDF leverages a lightweight neural network to model the editing process. Extensive experiments on two novel applications, namely controllable 3D editing and flexible fine-grained 3D manipulation, demonstrate the effectiveness, practicality and flexibility of the proposed ProGDF.
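To make the "editing process as a field" idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture or training code): a tiny MLP takes a Gaussian's center together with an editing-progress scalar t in [0, 1] and predicts parameter offsets (here a position delta and a color delta). Once such a network is trained, a user can slide t to render any intermediate editing result in real time, with no retraining in the loop.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianDifferentialField:
    """Hypothetical sketch of a GDF-style network: a two-layer MLP
    mapping (x, y, z, t) -> parameter offsets for one Gaussian.
    Weights here are random; a real model would be trained against
    the intermediate editing results extracted by PGS."""

    def __init__(self, hidden=32):
        self.w1 = rng.standard_normal((4, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 6)) * 0.1
        self.b2 = np.zeros(6)

    def __call__(self, xyz, t):
        # xyz: (N, 3) Gaussian centers; t: editing progress in [0, 1]
        x = np.concatenate([xyz, np.full((len(xyz), 1), t)], axis=1)
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        out = h @ self.w2 + self.b2
        # Split the 6-dim output into a position delta and a color delta
        return out[:, :3], out[:, 3:]

gdf = GaussianDifferentialField()
xyz = rng.standard_normal((1024, 3))  # toy Gaussian centers
d_xyz, d_rgb = gdf(xyz, t=0.5)        # query the midpoint of the edit
print(d_xyz.shape, d_rgb.shape)
```

Because the network is queried per Gaussian and per progress value, varying t yields a continuum of editing states from a single trained model, which is what enables the out-of-loop, controllable interaction described above.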