A multi-preconditioned LBFGS (MP-LBFGS) algorithm is introduced for training finite-basis physics-informed neural networks (FBPINNs). The algorithm is motivated by the nonlinear additive Schwarz method and exploits the domain-decomposition-inspired additive architecture of FBPINNs, in which local neural networks are defined on subdomains, thereby localizing the network representation. Parallel, subdomain-local quasi-Newton corrections are then constructed on the corresponding local parts of the architecture. A key feature is a novel nonlinear multi-preconditioning mechanism, in which subdomain corrections are optimally combined through the solution of a low-dimensional subspace minimization problem. Numerical experiments indicate that MP-LBFGS can improve both convergence speed and model accuracy over standard LBFGS, while incurring lower communication overhead.
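The combination step described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: it assumes a generic smooth loss, takes some precomputed "subdomain" correction directions as columns of a matrix `D`, and picks combination coefficients by minimizing a quadratic model of the loss over the low-dimensional subspace spanned by those corrections (a finite-difference Hessian action stands in for whatever curvature information the actual method uses). All names (`combine_corrections`, `D`, etc.) are illustrative.

```python
import numpy as np

def combine_corrections(grad, x, D, eps=1e-6):
    """Optimally combine correction directions (columns of D) by solving a
    low-dimensional subspace minimization over the quadratic model
        q(a) = g^T D a + 0.5 a^T (D^T H D) a,
    where the Hessian action H d is approximated by finite differences.
    Returns the updated iterate x + D @ alpha."""
    K = D.shape[1]
    g = grad(x)
    # Finite-difference approximation of H @ d_k for each correction d_k.
    HD = np.column_stack(
        [(grad(x + eps * D[:, k]) - g) / eps for k in range(K)]
    )
    A = D.T @ HD          # K x K reduced Hessian
    b = -D.T @ g          # K-dimensional reduced gradient
    alpha = np.linalg.solve(A, b)
    return x + D @ alpha

# Toy example: a diagonal quadratic loss split into two "subdomains".
Q = np.diag([1.0, 10.0, 100.0, 1000.0])
loss = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x = np.ones(4)
# Local Newton corrections, each supported on one half of the variables.
d1 = np.zeros(4); d1[:2] = -grad(x)[:2] / np.diag(Q)[:2]
d2 = np.zeros(4); d2[2:] = -grad(x)[2:] / np.diag(Q)[2:]
D = np.column_stack([d1, d2])

x_new = combine_corrections(grad, x, D)
```

For this separable quadratic the subspace solve recovers the coefficients `alpha = (1, 1)`, so the combined step reaches the exact minimizer, whereas either local correction alone would leave half the variables untouched. In the nonconvex setting of the paper the combination is found by an actual minimization rather than one quadratic solve, but the dimensionality of the problem is the same: one coefficient per subdomain.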