Physics-Informed Neural Networks (PINNs) have proven effective in solving partial differential equations (PDEs) by seamlessly blending data and physics, especially when some data are available. However, extending PINNs to high-dimensional and even high-order PDEs encounters significant challenges due to the computational cost of automatic differentiation in the residual loss. Herein, we address the limitations of PINNs in handling high-dimensional and high-order PDEs by introducing Hutchinson Trace Estimation (HTE). Starting with the second-order high-dimensional PDEs ubiquitous in scientific computing, HTE transforms the calculation of the entire Hessian matrix into Hessian vector products (HVPs). This approach alleviates the computational bottleneck via Taylor-mode automatic differentiation and significantly reduces memory consumption, from storing the full Hessian matrix to a single HVP. We further show that HTE converges to the original PINN loss and is unbiased under specific conditions. Comparisons with Stochastic Dimension Gradient Descent (SDGD) highlight the distinct advantages of HTE, particularly in scenarios with significant variance among dimensions. We then extend HTE to higher-order and higher-dimensional PDEs, specifically the biharmonic equation. By employing tensor-vector products (TVPs), HTE efficiently handles the colossal fourth-order derivative tensor of the high-dimensional biharmonic equation, saving memory and enabling rapid computation. The effectiveness of HTE is illustrated through experiments, demonstrating convergence rates comparable to SDGD under memory and speed constraints. Additionally, HTE proves valuable in accelerating gradient-enhanced PINNs (gPINNs) as well as the biharmonic equation. Overall, HTE opens up a new capability in scientific machine learning for tackling high-order and high-dimensional PDEs.
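To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of Hutchinson trace estimation for a Laplacian in JAX: the trace of the Hessian is estimated as an expectation of quadratic forms v^T H v over Rademacher probe vectors, where each v^T H v needs only a Hessian-vector product rather than the full Hessian. The function name `laplacian_hutchinson` and the quadratic test function are illustrative assumptions.

```python
import jax
import jax.numpy as jnp


def laplacian_hutchinson(f, x, key, num_samples=16):
    """Estimate the Laplacian of f at x, i.e. tr(Hessian), via Hutchinson's
    estimator. The full Hessian is never materialized: each probe uses only
    a Hessian-vector product (forward-over-reverse autodiff)."""
    def hvp(v):
        # HVP: directional derivative of grad(f) along v, via jvp over grad.
        return jax.jvp(jax.grad(f), (x,), (v,))[1]

    # Rademacher probes satisfy E[v v^T] = I, hence E[v^T H v] = tr(H).
    vs = jax.random.rademacher(key, (num_samples, x.shape[0]), dtype=x.dtype)
    return jnp.mean(jax.vmap(lambda v: v @ hvp(v))(vs))


# Illustrative check: f(x) = sum(x_i^2) has Hessian 2I, so the Laplacian
# in d dimensions is exactly 2d; here the estimator has zero variance.
d = 100
f = lambda x: jnp.sum(x ** 2)
x = jnp.ones(d)
est = laplacian_hutchinson(f, x, jax.random.PRNGKey(0))
```

In a PINN residual loss, `f` would be the network output at a collocation point, and the memory saving comes from replacing the d-by-d Hessian with a handful of d-dimensional HVPs.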