In this paper, we propose a data-driven framework for constructing efficient approximate inverse preconditioners for elliptic partial differential equations (PDEs) by learning the Green's function of the underlying operator with neural networks (NNs). The training process integrates four key components: an adaptive multiscale neural architecture ($\alpha$MSNN) that captures hierarchical features across the near-, middle-, and far-field regimes; coarse-grid anchor data that ensure physical identifiability; a multi-$\varepsilon$ staged training protocol that progressively refines the Green's function representation across spatial scales; and an overlapping domain decomposition that enables local adaptation while maintaining global consistency. Once trained, the NN-approximated Green's function is compressed directly into either a hierarchical ($\mathcal{H}$-) matrix or a sparse matrix, using only the mesh geometry and the network output. This geometric construction achieves nearly linear complexity in both setup and application while preserving the spectral properties essential for effective preconditioning. Numerical experiments on challenging elliptic PDEs demonstrate that the resulting preconditioners consistently yield fast convergence and low iteration counts.
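To make the geometric construction concrete, the following is a minimal sketch, not the authors' implementation, of how a trained Green's-function surrogate might be compressed into a sparse approximate inverse using only mesh coordinates. The interface `green_fn` (a callable standing in for the trained network), the cutoff `radius`, and the helper name are illustrative assumptions; quadrature weights are omitted for brevity.

```python
import scipy.sparse as sp
from scipy.spatial import cKDTree

def assemble_sparse_preconditioner(points, green_fn, radius):
    """Sparse assembly of M ~= A^{-1}: keep only near-field entries
    G(x_i, x_j) with |x_i - x_j| <= radius, evaluated by the trained
    surrogate `green_fn` (hypothetical interface to the network)."""
    tree = cKDTree(points)            # spatial index over mesh nodes
    rows, cols, vals = [], [], []
    for i, x in enumerate(points):
        # the sparsity pattern comes purely from mesh geometry
        for j in tree.query_ball_point(x, radius):
            rows.append(i)
            cols.append(j)
            vals.append(float(green_fn(x, points[j])))
    n = len(points)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Example use as a preconditioner inside a Krylov solver (M approximates A^{-1}):
# M = assemble_sparse_preconditioner(mesh_nodes, trained_green_fn, radius=0.1)
# x, info = scipy.sparse.linalg.gmres(A, b, M=M)
```

The cutoff `radius` trades sparsity against near-field accuracy; in the paper's framework the far-field contribution would instead be captured by the low-rank blocks of the $\mathcal{H}$-matrix variant.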