The extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single-hidden-layer feed-forward neural network. It presets the weight and bias coefficients of the hidden layer to random values, which remain fixed throughout the computation, and trains the parameters of the output layer by linear least squares. ELM is known to be much faster than physics-informed neural networks (PINNs). However, classical ELM is still computationally expensive when a highly expressive representation of the solution is desired, as this requires solving a large least squares system. In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs but is also well suited to parallel computation. In numerical analysis, DDMs have been widely studied as a way to reduce, through parallel computation, the time needed to obtain finite element solutions of elliptic PDEs. Among these approaches, nonoverlapping DDMs have attracted the most attention. Motivated by these methods, we introduce local neural networks, each valid only on its corresponding subdomain, together with an auxiliary variable defined on the interface. We construct a system for the auxiliary variable and the parameters of the local neural networks; a Schur complement system on the interface is then derived by eliminating the output-layer parameters. The auxiliary variable is obtained directly by solving this reduced system, after which the parameters of each local neural network are computed in parallel. We also propose a method for initializing the hidden-layer parameters that yields high approximation quality in large systems. Numerical results verifying the acceleration performance of the proposed method with respect to the number of subdomains are presented.
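To make the ELM idea above concrete, the following is a minimal, self-contained sketch (not the paper's implementation) that solves the 1D model problem -u''(x) = π² sin(πx) on [0,1] with u(0)=u(1)=0, whose exact solution is sin(πx). The hidden-layer weights `w` and biases `b`, the neuron count, and the collocation grid are all illustrative choices; only the output-layer coefficients `c` are trained, via a single linear least squares solve.

```python
import numpy as np

# Illustrative ELM for -u''(x) = pi^2 sin(pi x) on [0,1], u(0)=u(1)=0.
rng = np.random.default_rng(0)
M = 100                               # number of hidden neurons (illustrative)
w = rng.uniform(-5.0, 5.0, M)         # fixed random hidden weights
b = rng.uniform(-5.0, 5.0, M)        # fixed random hidden biases

x = np.linspace(0.0, 1.0, 201)[:, None]   # interior collocation points
z = w * x + b
t = np.tanh(z)                            # hidden activations phi(x)
phi_xx = -2.0 * t * (1.0 - t**2) * w**2   # d^2/dx^2 tanh(w x + b)

# Assemble the least squares system: PDE rows plus two boundary rows.
xb = np.array([[0.0], [1.0]])
A = np.vstack([-phi_xx, np.tanh(w * xb + b)])
rhs = np.concatenate([np.pi**2 * np.sin(np.pi * x).ravel(), [0.0, 0.0]])

# Train only the output layer: one linear least squares solve.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = t @ c                                  # ELM approximation at x
err = np.max(np.abs(u - np.sin(np.pi * x).ravel()))
```

The hidden layer never changes after initialization; all training cost is in the single `lstsq` call, which is why the least squares system grows large when high accuracy is demanded and why decomposing it pays off.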
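The Schur-complement elimination on the interface can be sketched on a toy block system. The matrices below are synthetic stand-ins (not the paper's actual ELM blocks): `A1`, `A2` play the role of the local subdomain blocks, `B1`, `B2` couple each subdomain to the interface variable `g`, and `C` is the interface block.

```python
import numpy as np

# Toy Schur-complement solve: eliminate local unknowns u1, u2, solve a small
# reduced system for the interface variable g, then back-substitute locally.
rng = np.random.default_rng(1)
n, m = 8, 3                           # local and interface sizes (synthetic)

def spd(k):
    # Random symmetric positive definite block.
    R = rng.standard_normal((k, k))
    return R @ R.T + k * np.eye(k)

A1, A2 = spd(n), spd(n)
B1, B2 = rng.standard_normal((n, m)), rng.standard_normal((n, m))
C = 50.0 * np.eye(m)                  # kept dominant so S stays invertible
f1, f2, h = (rng.standard_normal(n), rng.standard_normal(n),
             rng.standard_normal(m))

# Interface system after eliminating u1 = A1^{-1}(f1 - B1 g), u2 likewise:
#   S g = h - B1^T A1^{-1} f1 - B2^T A2^{-1} f2,
#   S   = C - B1^T A1^{-1} B1 - B2^T A2^{-1} B2.
S = C - B1.T @ np.linalg.solve(A1, B1) - B2.T @ np.linalg.solve(A2, B2)
g = np.linalg.solve(S, h - B1.T @ np.linalg.solve(A1, f1)
                       - B2.T @ np.linalg.solve(A2, f2))

# Back-substitution: these local solves are independent, one per subdomain,
# and could run in parallel.
u1 = np.linalg.solve(A1, f1 - B1 @ g)
u2 = np.linalg.solve(A2, f2 - B2 @ g)

# Sanity check against a direct solve of the full block system.
K = np.block([[A1, np.zeros((n, n)), B1],
              [np.zeros((n, n)), A2, B2],
              [B1.T, B2.T, C]])
ref = np.linalg.solve(K, np.concatenate([f1, f2, h]))
```

The key structural point mirrors the abstract: only the small `m`-by-`m` interface system is solved globally, while each subdomain's unknowns are recovered by an independent local solve.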