In this paper, we introduce the Deep Finite Volume Method (DFVM), an innovative deep learning framework tailored for solving high-order (order \(\geq 2\)) partial differential equations (PDEs). Our approach centers on a novel loss function crafted from local conservation laws derived from the original PDE, which distinguishes DFVM from traditional deep learning methods. By formulating DFVM in the weak form of the PDE rather than the strong form, we enhance accuracy, which is particularly beneficial for PDEs with less smooth solutions, in contrast to strong-form-based methods such as Physics-Informed Neural Networks (PINNs). A key technique of DFVM is its transformation of all second- and higher-order derivatives of the neural network into first-order derivatives, which can be computed directly using Automatic Differentiation (AD). This adaptation significantly reduces computational overhead and is particularly advantageous for solving high-dimensional PDEs. Numerical experiments demonstrate that DFVM achieves solution accuracy equal or superior to that of existing deep learning methods such as PINN, the Deep Ritz Method (DRM), and Weak Adversarial Networks (WAN), while drastically reducing computational cost. Notably, for PDEs with nonsmooth solutions, DFVM yields approximate solutions whose relative errors are up to two orders of magnitude lower than those obtained by PINN. The implementation of DFVM is available on GitHub at \href{https://github.com/Sysuzqs/DFVM}{https://github.com/Sysuzqs/DFVM}.