3D Gaussian Splatting (3DGS) has emerged as a powerful technique for real-time novel view synthesis. Although 3DGS is an explicit representation optimized by propagating gradients among primitives, it directly adopts optimization practices that are standard in deep neural networks (DNNs), such as synchronous weight updating and Adam with its adaptive gradient. However, given the physical significance and specific design of 3DGS, two details of its optimization have been overlooked: (i) update-step coupling, which induces optimizer-state rescaling and costly attribute updates outside the current viewpoints, and (ii) gradient coupling in the moments, which may lead to under- or over-effective regularization. This complex coupling remains under-explored. After revisiting the optimization of 3DGS, we take a step toward decoupling it and recompose the process into three components: Sparse Adam, Re-State Regularization, and Decoupled Attribute Regularization. Through extensive experiments under the 3DGS and 3DGS-MCMC frameworks, our work provides a deeper understanding of these components. Finally, based on this empirical analysis, we re-design the optimization and propose AdamW-GS by re-coupling the beneficial components, achieving better optimization efficiency and representation effectiveness simultaneously.
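To make the decoupling ideas above concrete, the following is a minimal NumPy sketch of a sparse, AdamW-style update for per-primitive attributes. It is purely illustrative, not the paper's actual AdamW-GS implementation: the function name, the per-primitive step counters, and the visibility mask (standing in for "primitives touched by the current viewpoint") are all assumptions. It shows the two recomposed behaviors the abstract names: only visible primitives have their moments and attributes updated (Sparse Adam), and the regularization term is applied directly to the attributes rather than passing through the adaptive moments (decoupled regularization, in the spirit of AdamW).

```python
import numpy as np

def sparse_adamw_step(params, grads, visible, state, lr=1e-3,
                      betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-2):
    """One hypothetical sparse AdamW step on per-primitive attributes.

    params, grads : (N,) arrays of one scalar attribute per Gaussian
    visible       : (N,) boolean mask of primitives seen by the viewpoint
    state         : dict with first moment "m", second moment "v", and
                    per-primitive step counts "t" (all (N,) arrays)
    """
    b1, b2 = betas
    m, v, t = state["m"], state["v"], state["t"]
    idx = np.flatnonzero(visible)   # update only visible primitives

    t[idx] += 1  # each primitive advances its own step count
    m[idx] = b1 * m[idx] + (1 - b1) * grads[idx]
    v[idx] = b2 * v[idx] + (1 - b2) * grads[idx] ** 2
    m_hat = m[idx] / (1 - b1 ** t[idx])   # bias correction per primitive
    v_hat = v[idx] / (1 - b2 ** t[idx])

    # Decoupled regularization: the decay term is added outside the
    # adaptive gradient, so it never enters the moment estimates.
    params[idx] -= lr * (m_hat / (np.sqrt(v_hat) + eps)
                         + weight_decay * params[idx])
    return params, state
```

Invisible primitives keep their attributes, moments, and step counts untouched, which is the point of the sparse update: no optimizer-state rescaling or attribute writes outside the viewpoint.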