We explore the use of the Gauss-Newton method for optimization in shape learning, including implicit neural surfaces and geometry-informed neural networks. The method addresses key challenges in shape learning, such as the ill-conditioning of the underlying differential constraints and the mismatch between the optimization problem in parameter space and the function space where the problem is naturally posed. The result is significantly faster and more stable convergence than standard first-order methods, achieved in far fewer iterations. Experiments across benchmark shape optimization tasks demonstrate that the Gauss-Newton method consistently improves both training speed and final solution accuracy.
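To make the core idea concrete, the following is a minimal sketch of a Gauss-Newton iteration on a toy nonlinear least-squares problem. This is illustrative only: the exponential model, variable names, and solver setup here are our own assumptions, not the paper's shape-learning formulation, which applies the same update rule to the residuals of neural shape representations.

```python
import numpy as np

# Toy model: fit y = a * exp(b * t) by Gauss-Newton.
# (Hypothetical example; the paper's residuals come from
# differential shape constraints, not this model.)

def residuals(theta, t, y):
    a, b = theta
    return a * np.exp(b * t) - y

def jacobian(theta, t):
    a, b = theta
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(b * t)          # d r / d a
    J[:, 1] = a * t * np.exp(b * t)  # d r / d b
    return J

def gauss_newton(theta, t, y, iters=20):
    for _ in range(iters):
        r = residuals(theta, t, y)
        J = jacobian(theta, t)
        # Gauss-Newton step: solve min ||J @ delta + r|| in the
        # least-squares sense, i.e. J^T J delta = -J^T r.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + delta
    return theta

t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(0.5 * t)            # noiseless data from a=2, b=0.5
theta_hat = gauss_newton(np.array([1.0, 0.0]), t, y)
print(np.round(theta_hat, 4))
```

The step direction uses the curvature of the squared residual (through `J^T J`) rather than the raw gradient, which is why Gauss-Newton typically needs far fewer iterations than first-order methods on ill-conditioned problems of this kind.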