Extreme learning machines (ELMs), which fix the hidden-layer parameters and solve for the last-layer coefficients by linear least squares, can typically solve partial differential equations faster and more accurately than physics-informed neural networks (PINNs). However, they remain computationally expensive when high accuracy requires solving large least-squares problems. Domain decomposition methods (DDMs) for ELMs have enabled parallel computation that reduces the training time of large systems. This paper constructs a coarse space for ELMs that enables further acceleration of their training. By partitioning the interface variables into coarse and non-coarse variables, selective elimination yields a Schur complement system on the non-coarse variables in which the coarse problem is embedded. Key to the performance of the proposed method is a Neumann-Neumann acceleration that exploits the coarse space. Numerical experiments demonstrate significant speedups over a previous DDM for ELMs.
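To make the ELM idea concrete, the following is a minimal sketch (not the paper's method, and with no domain decomposition or coarse space) of solving a 1D Poisson problem with preset random hidden-layer parameters and a least-squares solve for the last-layer coefficients; the network width, weight ranges, and collocation grid are illustrative assumptions.

```python
import numpy as np

# Minimal ELM sketch for the 1D Poisson problem -u'' = f on (0, 1) with
# u(0) = u(1) = 0; we take the exact solution u(x) = sin(pi x), so that
# f(x) = pi^2 sin(pi x). Hidden-layer weights and biases are preset at
# random; only the last-layer coefficients c are found by least squares.
rng = np.random.default_rng(0)
n_hidden = 60                            # illustrative network width
w = rng.uniform(-4.0, 4.0, n_hidden)     # preset hidden weights (assumption)
b = rng.uniform(-4.0, 4.0, n_hidden)     # preset hidden biases (assumption)

def phi(x):
    # Hidden-layer features tanh(w x + b) at points x (shape: len(x) x n_hidden).
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    # Second derivative of tanh(w x + b) with respect to x:
    # d^2/dx^2 tanh(w x + b) = -2 t (1 - t^2) w^2, where t = tanh(w x + b).
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

x_int = np.linspace(0.0, 1.0, 101)[1:-1]  # interior collocation points
x_bc = np.array([0.0, 1.0])               # boundary points

# Stack PDE residual rows (-u'' = f) and boundary rows (u = 0), then solve
# the overdetermined linear system for c in the least-squares sense.
A = np.vstack([-phi_xx(x_int), phi(x_bc)])
rhs = np.concatenate([np.pi**2 * np.sin(np.pi * x_int), np.zeros(2)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_approx = phi(x_int) @ c
err = np.max(np.abs(u_approx - np.sin(np.pi * x_int)))
print(f"max error: {err:.2e}")
```

Training reduces to one linear least-squares solve; the cost of that solve for large systems is what motivates the domain decomposition and coarse-space acceleration described above.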