We propose a new neural-network-based method for solving inverse problems for partial differential equations (PDEs) by formulating the PDE inverse problem as a bilevel optimization problem. At the upper level, we minimize the data loss with respect to the PDE parameters. At the lower level, we train a neural network to locally approximate the PDE solution operator in a neighborhood of a given set of PDE parameters, which enables an accurate approximation of the descent direction for the upper-level optimization problem. The lower-level loss function includes the L2 norms of both the residual and its derivative with respect to the PDE parameters. We apply gradient descent simultaneously to both the upper- and lower-level optimization problems, leading to an effective and fast algorithm. The method, which we refer to as BiLO (Bilevel Local Operator learning), can also efficiently infer unknown functions in the PDEs through the introduction of an auxiliary variable. We provide a theoretical analysis that justifies our approach. Through extensive experiments over multiple PDE systems, we demonstrate that our method enforces strong PDE constraints, is robust to sparse and noisy data, and eliminates the need to balance the residual and the data loss, a trade-off inherent to the soft PDE constraints in many existing methods.
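The bilevel structure described above can be sketched on a toy problem. The following is our own minimal construction, not the paper's code: the PDE is -u''(x) = theta on (0,1) with u(0) = u(1) = 0 (exact solution u = theta*x*(1-x)/2), and a single trainable scalar w stands in for the lower-level network, parameterizing the local operator u_w(x; theta) = w*theta*x*(1-x). The lower-level loss contains both the residual and its theta-derivative; the upper-level data loss is differentiated with respect to theta only, and both levels are updated simultaneously by gradient descent.

```python
import numpy as np

# Toy BiLO-style sketch (our construction, not the paper's implementation).
# PDE: -u''(x) = theta on (0,1), u(0) = u(1) = 0; exact u = theta*x*(1-x)/2.
# Local operator: u_w(x; theta) = w * theta * x*(1-x), with a trainable
# scalar w standing in for a neural network (w = 0.5 is exact).
theta_true = 2.0
x = np.linspace(0.0, 1.0, 11)
phi = x * (1.0 - x)
u_obs = theta_true * phi / 2.0          # noiseless synthetic observations

theta, w = 1.0, 0.0                     # initial guesses
lr_up, lr_low = 1.0, 0.02
for _ in range(2000):
    # Lower level: residual r = -u_w'' - theta = theta*(2w - 1), plus its
    # derivative dr/dtheta = 2w - 1 (the term that keeps the operator
    # accurate in a neighborhood of the current theta). The loss is
    # r^2 + (dr/dtheta)^2; its gradient in w is computed analytically.
    grad_w = 4.0 * (theta**2 + 1.0) * (2.0 * w - 1.0)
    # Upper level: data loss sum_i (u_w(x_i; theta) - u_obs_i)^2,
    # differentiated with respect to theta only.
    grad_theta = np.sum(2.0 * (w * theta * phi - u_obs) * w * phi)
    # Simultaneous gradient descent on both levels.
    w -= lr_low * grad_w
    theta -= lr_up * grad_theta

print(round(theta, 3), round(w, 3))  # theta -> 2.0, w -> 0.5
```

Note that the lower-level update drives w toward 0.5 regardless of theta (because the residual-derivative term 2w - 1 is independent of theta), so the upper-level gradient in theta is a good descent direction throughout the joint iteration; in the actual method the scalar w is replaced by neural-network weights and the gradients come from automatic differentiation.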