The traditional limitations of neural networks in reliably generalizing beyond the convex hulls of their training data present a significant problem for computational physics, in which one often wishes to solve PDEs in regimes far beyond anything that can be experimentally or analytically validated. In this paper, we show how it is possible to circumvent these limitations by constructing formally verified neural network solvers for PDEs, with rigorous convergence, stability, and conservation properties, whose correctness can therefore be guaranteed even in extrapolatory regimes. By using the method of characteristics to predict the analytical properties of PDE solutions a priori (even in regions arbitrarily far from the training domain), we show how it is possible to construct rigorous extrapolatory bounds on the worst-case L^∞ errors of shallow neural network approximations. Then, by decomposing PDE solutions into compositions of simpler functions, we show how it is possible to compose these shallow neural networks together to form deep architectures, based on ideas from compositional deep learning, in which the large L^∞ errors in the approximations have been suppressed. The resulting framework, called BEACONS (Bounded-Error, Algebraically-COmposable Neural Solvers), comprises both an automatic code generator for the neural solvers themselves and a bespoke automated theorem-proving system for producing machine-checkable certificates of correctness. We apply the framework to a variety of linear and non-linear PDEs, including the linear advection and inviscid Burgers' equations, as well as the full compressible Euler equations, in both 1D and 2D, and illustrate how BEACONS architectures are able to extrapolate solutions far beyond the training data in a reliable and bounded way. Various advantages of the approach over the classical PINN approach are discussed.
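The error-suppression claim for composed networks rests on a standard sup-norm composition bound: if an inner map f is approximated to within ε₁ everywhere, and an L-Lipschitz outer map g is approximated to within ε₂, then the composed surrogate errs by at most ε₂ + L·ε₁. The following is a minimal numerical sketch of that bound only — it is not the BEACONS implementation, and the functions, tolerances, and perturbations are invented purely for illustration:

```python
import numpy as np

# Toy illustration (assumed example, not from the paper) of the composition
# bound used to control L^inf error when chaining shallow approximators:
#   |f - f_hat| <= e1 everywhere, g is L-Lipschitz, |g - g_hat| <= e2
#   =>  |g(f(x)) - g_hat(f_hat(x))| <= e2 + L * e1.

x = np.linspace(0.0, 1.0, 10_001)

e1, e2, L = 1e-3, 5e-4, 2.0                  # per-stage sup-norm errors, Lipschitz const
f     = lambda t: t**2                       # exact inner map
f_hat = lambda t: t**2 + e1 * np.sin(40 * t) # surrogate with |f - f_hat| <= e1
g     = lambda y: L * y                      # exact outer map, Lipschitz constant L
g_hat = lambda y: L * y + e2 * np.cos(40 * y)# surrogate with |g - g_hat| <= e2

err   = np.max(np.abs(g(f(x)) - g_hat(f_hat(x))))
bound = e2 + L * e1
assert err <= bound + 1e-12
print(f"observed sup error {err:.2e} <= certified bound {bound:.2e}")
```

Because the bound depends only on the per-stage tolerances and Lipschitz constants, not on where x lies, it holds unchanged outside any training window — which is the sense in which certificates of this form survive extrapolation.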