Deep learning-based hybrid iterative methods (DL-HIMs) integrate classical numerical solvers with neural operators, exploiting the complementary spectral biases of the two components to accelerate convergence. Despite this promise, many DL-HIMs stagnate at false fixed points where the neural updates vanish while the physical residual remains large, raising questions about their reliability in scientific computing. In this paper, we provide evidence that performance is highly sensitive to training paradigms and update strategies, even when the neural architecture is fixed. Through a detailed study of a DeepONet-based hybrid iterative numerical transferable solver (HINTS) and an FFT-based Fourier neural solver (FNS), we show that significant physical residuals can persist when training objectives are not aligned with solver dynamics and problem physics. We further examine Anderson acceleration (AA) and demonstrate that its classical form is ill-suited for nonlinear neural operators. To overcome this, we introduce physics-aware Anderson acceleration (PA-AA), which minimizes the physical residual rather than the fixed-point update. Numerical experiments confirm that PA-AA restores reliable convergence in substantially fewer iterations. These findings provide a concrete answer to ongoing controversies surrounding AI-based PDE solvers: reliability hinges not only on architectures but on physically informed training and iteration design.
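To make the distinction concrete, the sketch below illustrates the PA-AA idea on a linear system $Ax = b$: the mixing weights over a short window of stored iterates are chosen to minimize the physical residual $\|b - Ax\|$, whereas classical AA would instead minimize the fixed-point update. This is a minimal illustration under our own assumptions (a damped stationary update standing in for the neural operator, a window of four iterates, NumPy least squares for the constrained minimization), not the paper's implementation.

```python
import numpy as np

def pa_aa_step(A, b, X_hist):
    """One physics-aware Anderson acceleration (PA-AA) step (illustrative):
    pick affine mixing weights over the stored iterates X_hist (columns)
    that minimize the *physical* residual ||b - A x||, rather than the
    fixed-point update used by classical AA."""
    R = b[:, None] - A @ X_hist            # physical residual of each stored iterate
    # Enforce sum(w) = 1 by substituting w_m = 1 - sum(w_1..w_{m-1}):
    # minimize ||dR c + R[:, -1]|| over the free coefficients c.
    dR = R[:, :-1] - R[:, [-1]]
    c, *_ = np.linalg.lstsq(dR, -R[:, -1], rcond=None)
    w = np.append(c, 1.0 - c.sum())
    return X_hist @ w                      # accelerated iterate

# Toy demo: small SPD-like system, damped stationary update as a stand-in
# for the neural/hybrid fixed-point map.
rng = np.random.default_rng(0)
A = np.diag([4.0, 3.0, 2.0]) + 0.1 * np.ones((3, 3))
b = rng.standard_normal(3)
x = np.zeros(3)
hist = [x]
for _ in range(20):
    x = x + 0.2 * (b - A @ x)              # plain fixed-point update
    hist.append(x)
    X = np.stack(hist[-4:], axis=1)        # short window, as in AA(m)
    x = pa_aa_step(A, b, X)                # physics-aware correction
    hist[-1] = x
print(np.linalg.norm(b - A @ x))           # physical residual, not update size
```

Because the last stored iterate (weights $w = (0, \dots, 0, 1)$) is always a feasible choice, the accelerated iterate's physical residual can never exceed that of the plain update, which is exactly the monotonicity that a false fixed point of the unaccelerated scheme lacks.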