Physics-informed deep learning often faces optimization challenges when solving partial differential equations (PDEs): large solution spaces must be explored, many iterations are required, and training can become unstable. These challenges stem in particular from the ill-conditioning of the optimization problem induced by the differential terms in the loss function. To address them, we propose learning a solver, i.e., solving PDEs with a physics-informed iterative algorithm trained on data. Our method learns to condition a gradient descent algorithm so that it automatically adapts to each PDE instance, significantly accelerating and stabilizing optimization and enabling faster convergence of physics-aware models. Moreover, whereas traditional physics-informed methods solve a single PDE instance, our approach extends to parametric PDEs: by combining the physics loss gradient with the PDE parameters, it can solve over a distribution of PDE parameters, including coefficients, initial conditions, and boundary conditions. We demonstrate the effectiveness of our approach through empirical experiments on multiple datasets, comparing both training-time and test-time optimization performance. The code is available at https://github.com/2ailesB/neural-parametric-solver.
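The idea of "learning to condition" gradient descent on a physics-informed loss can be pictured with a minimal sketch. Below, a 1D Poisson problem u'' = f with zero Dirichlet boundary conditions is discretized by finite differences, and the gradient step on the squared PDE residual is rescaled by a conditioner. The conditioner here is a hand-built stand-in (the inverse Jacobian, i.e. a Gauss-Newton step); in the paper's method it would be a network trained on data over a distribution of PDE parameters, so everything in this snippet is illustrative, not the authors' implementation.

```python
import numpy as np

n = 17                                   # grid points, including boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)        # exact solution is sin(pi x)

m = n - 2                                # interior unknowns (BCs fixed at 0)
A = (np.diag(-2.0 * np.ones(m))
     + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2   # discrete Laplacian

def residual(u):
    return A @ u - f[1:-1]               # PDE residual at interior points

def solve(conditioner, steps):
    u = np.zeros(m)                      # zero initial guess
    for _ in range(steps):
        g = A.T @ residual(u)            # gradient of 0.5 * ||residual||^2
        u = u - conditioner(g)           # a learned update would go here
    return u

# Plain gradient descent: the step size is limited by the ill-conditioned
# Hessian A^T A (largest eigenvalue ~ 16 / h^4), so progress is slow.
naive = solve(lambda g: (h**4 / 16.0) * g, steps=1000)

# "Ideal" conditioner: apply the inverse Jacobian (a Gauss-Newton step).
# A learned solver aims to approximate such a map from data.
conditioned = solve(lambda g: np.linalg.solve(A.T @ A, g), steps=5)

print(np.linalg.norm(residual(naive)), np.linalg.norm(residual(conditioned)))
```

Comparing the two residual norms shows why conditioning matters: the conditioned iteration drives the residual to machine precision in a handful of steps, while the plain gradient iteration barely reduces it after a thousand, which is exactly the ill-conditioning the abstract attributes to the differential terms in the loss.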