Recently, designing neural solvers for large-scale linear systems of equations has emerged as a promising approach in scientific and engineering computing. This paper first introduces the Richardson(m) neural solver, which employs a meta network to predict the weights of the long-step Richardson iterative method. Next, by incorporating momentum and preconditioning techniques, we further enhance its convergence. Numerical experiments on anisotropic second-order elliptic equations demonstrate that these new solvers achieve faster convergence and lower computational complexity than both the Chebyshev iterative method with optimal weights and the Chebyshev semi-iterative method. To address the strong dependence of these single-level neural solvers on PDE parameters and grid size, we integrate them with two multilevel neural solvers developed in recent years. Using alternating optimization techniques, we construct Richardson(m)-FNS for anisotropic equations and NAG-Richardson(m)-WANS for the Helmholtz equation. Numerical experiments show that these two multilevel neural solvers effectively overcome the drawbacks of the single-level methods, providing better robustness and computational efficiency.
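For readers unfamiliar with the underlying iteration, the long-step Richardson method cycles through m relaxation weights, x_{k+1} = x_k + ω_{k mod m}(b − A x_k); in the proposed Richardson(m) neural solver these weights would be predicted by a meta network. The following is a minimal NumPy sketch of the plain iteration only, with hand-picked weights standing in (hypothetically) for the meta network's output:

```python
import numpy as np

def richardson_m(A, b, omegas, x0=None, iters=200):
    """Long-step Richardson iteration with m cyclically reused weights:
        x_{k+1} = x_k + omegas[k % m] * (b - A @ x_k)
    In the paper's Richardson(m) solver, `omegas` would come from a
    meta network; here they are fixed constants for illustration."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    m = len(omegas)
    for k in range(iters):
        x = x + omegas[k % m] * (b - A @ x)
    return x

# Toy SPD system (not from the paper) with two hand-picked weights.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson_m(A, b, omegas=[0.3, 0.2])
print(np.allclose(A @ x, b))  # converges for this well-conditioned system
```

Convergence of this basic scheme requires the weights to keep the spectral radius of the cycled iteration matrix below one, which is exactly the weight-selection problem the meta network is trained to solve.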