To overcome these obstacles and improve computational accuracy and efficiency, this paper presents the Randomized Radial Basis Function Neural Network (RRNN), an innovative approach designed explicitly for solving multiscale elliptic equations. The RRNN method begins by decomposing the computational domain into non-overlapping subdomains. Within each subdomain, the solution of the localized subproblem is approximated by a randomized radial basis function neural network with a Gaussian kernel. This network is distinguished by the random assignment of the width and center coefficients of its activation functions, so that training reduces to determining only the output-layer weight coefficients. For each subproblem, in a manner analogous to the Petrov-Galerkin finite element method, a linear system is formulated from a weak formulation. Subsequently, collocation points are randomly sampled on the subdomain boundaries to enforce $C^0$ and $C^1$ continuity as well as the boundary conditions, thereby coupling the localized solutions. The network is ultimately trained by the least squares method to determine the output-layer weights. To validate the effectiveness of the RRNN method, an extensive array of numerical experiments has been conducted; the results demonstrate that the proposed method yields notable improvements in both accuracy and efficiency.
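The core idea of fixing the hidden-layer parameters at random and solving only for the output-layer weights can be illustrated with a minimal sketch. This is a simplified 1D function-approximation example, not the paper's method: the target function, the parameter ranges, and the use of plain least-squares collocation (in place of the per-subdomain Petrov-Galerkin weak-form assembly and the boundary coupling described above) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth target on [0, 1], used only to illustrate the fit.
def u_exact(x):
    return np.sin(2 * np.pi * x)

M = 50                                  # number of Gaussian basis functions
centers = rng.uniform(0.0, 1.0, M)      # center coefficients: random, then fixed
widths = rng.uniform(5.0, 20.0, M)      # width coefficients: random, then fixed

def phi(x):
    # Gaussian-kernel features exp(-w^2 (x - c)^2); rows = points, cols = basis.
    return np.exp(-(widths * (x[:, None] - centers)) ** 2)

# Only the output-layer weights are trained, via linear least squares.
x_train = np.linspace(0.0, 1.0, 200)
A = phi(x_train)
w, *_ = np.linalg.lstsq(A, u_exact(x_train), rcond=None)

# Evaluate the trained network on a finer grid.
x_test = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(phi(x_test) @ w - u_exact(x_test)))
print(f"max abs error: {err:.2e}")
```

Because the centers and widths are frozen, the training problem is linear in the remaining unknowns, which is what makes the least-squares solve (rather than gradient-based optimization) sufficient.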