The numerical solution of differential equations using neural networks has become a central topic in scientific computing, with Physics-Informed Neural Networks (PINNs) emerging as a powerful paradigm for both forward and inverse problems. However, unlike classical numerical methods that offer established convergence guarantees, neural network-based approximations typically lack rigorous error bounds. Furthermore, the non-deterministic nature of their optimization makes it difficult to mathematically certify their accuracy. To address these challenges, we propose a "Learn and Verify" framework that provides computable, mathematically rigorous error bounds for neural-network solutions of differential equations. By combining a novel Doubly Smoothed Maximum (DSM) loss for training with interval arithmetic for verification, we compute rigorous a posteriori error bounds that serve as machine-verifiable proofs. Numerical experiments on nonlinear Ordinary Differential Equations (ODEs), including problems with time-varying coefficients and finite-time blow-up, demonstrate that the proposed framework successfully constructs rigorous enclosures of the true solutions, establishing a foundation for trustworthy scientific machine learning.
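As a rough illustration of the verification idea described above, the sketch below bounds the sup-norm of an ODE residual using naive interval arithmetic. Everything in it is an assumption made for the sake of a self-contained example rather than the paper's implementation: the ODE u' = u² (a standard finite-time blow-up model), the polynomial candidate standing in for a trained network, the subinterval count, and the names `Interval`, `poly_eval`, and `residual_sup_bound` are all hypothetical, and a genuine machine-verifiable certificate would additionally require outward (directed) rounding or a verified interval library, plus an argument turning the residual bound into an enclosure of the true solution.

```python
class Interval:
    """Closed interval [lo, hi]; outward rounding is omitted for brevity
    (a real certificate would use directed rounding or a verified library)."""
    def __init__(self, lo, hi=None):
        self.lo = float(lo)
        self.hi = float(lo if hi is None else hi)

    def _coerce(self, x):
        return x if isinstance(x, Interval) else Interval(x)

    def __add__(self, other):
        o = self._coerce(other)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, other):
        o = self._coerce(other)
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, other):
        o = self._coerce(other)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def mag(self):
        # Largest absolute value attained on the interval.
        return max(abs(self.lo), abs(self.hi))


def poly_eval(coeffs, x):
    """Horner evaluation of sum_k coeffs[k] * x**k over an Interval x."""
    acc = Interval(0.0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc


# Hypothetical candidate for u' = u^2, u(0) = 1 on [0, T]: the truncated
# series 1 + t + t^2 + t^3 stands in for a trained network's output.
U_COEFFS  = [1.0, 1.0, 1.0, 1.0]   # u~(t)
DU_COEFFS = [1.0, 2.0, 3.0]        # u~'(t)


def residual_sup_bound(T=0.5, n_sub=64):
    """Bound sup_{t in [0,T]} |u~'(t) - u~(t)^2| by interval evaluation on a
    uniform partition; the bound tightens as n_sub grows."""
    bound = 0.0
    for k in range(n_sub):
        t = Interval(k * T / n_sub, (k + 1) * T / n_sub)
        r = poly_eval(DU_COEFFS, t) - poly_eval(U_COEFFS, t) * poly_eval(U_COEFFS, t)
        bound = max(bound, r.mag())
    return bound


if __name__ == "__main__":
    print(f"residual sup-norm bound on [0, 0.5]: {residual_sup_bound():.4f}")
```

The partition-and-evaluate pattern is the standard way interval arithmetic turns a pointwise residual into a rigorous sup-norm bound; in the framework above, the trained network (rather than a polynomial) would be evaluated with an interval extension of its activations.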