Fractional and tempered fractional partial differential equations (PDEs) are effective models of long-range interactions, anomalous diffusion, and non-local effects. Traditional numerical methods for these problems are mesh-based and thus struggle with the curse of dimensionality (CoD). Physics-informed neural networks (PINNs) offer a promising alternative due to their universal approximation power, generalization ability, and mesh-free training. In principle, the Monte Carlo fractional PINN (MC-fPINN) estimates fractional derivatives via Monte Carlo methods and could thus overcome the CoD. However, Monte Carlo sampling may introduce significant variance and error, impeding convergence; in addition, MC-fPINN is sensitive to hyperparameters. More broadly, numerical methods for tempered fractional PDEs, and PINNs in particular, remain under-developed. Herein, we extend MC-fPINN to tempered fractional PDEs to address these issues, yielding the Monte Carlo tempered fractional PINN (MC-tfPINN). To reduce the high variance and error arising from Monte Carlo sampling, we replace the one-dimensional (1D) Monte Carlo estimator with 1D Gaussian quadrature, a modification applicable to both MC-fPINN and MC-tfPINN. We validate our methods on various forward and inverse problems of fractional and tempered fractional PDEs, scaling up to 100,000 dimensions. Our quadrature-based MC-fPINN/MC-tfPINN consistently outperforms the original versions in accuracy and convergence speed in very high dimensions.
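To illustrate the core modification, the following is a minimal sketch (not the paper's implementation) of the 1D trade-off the abstract describes: estimating a weakly singular integral of the form arising in fractional derivatives, \(\int_0^1 s^{-\alpha} f(s)\,ds\), by Monte Carlo with importance sampling versus Gauss-Jacobi quadrature, whose weight function absorbs the singular kernel. The integrand `f` and order `alpha` are hypothetical choices for demonstration.

```python
import numpy as np
from scipy.special import roots_jacobi

alpha = 0.5                    # hypothetical fractional order in the kernel s^{-alpha}
f = lambda s: np.exp(s)        # hypothetical smooth integrand

def gauss_jacobi_singular(f, alpha, n):
    """Gauss-Jacobi rule for \int_0^1 s^{-alpha} f(s) ds.

    roots_jacobi(n, 0, -alpha) gives nodes/weights for the weight
    (1+x)^{-alpha} on [-1, 1]; substituting s = (1+x)/2 absorbs the
    singular kernel into the quadrature weights exactly.
    """
    x, w = roots_jacobi(n, 0.0, -alpha)
    s = (1.0 + x) / 2.0
    return 2.0 ** (alpha - 1.0) * np.dot(w, f(s))

def mc_singular(f, alpha, n, rng):
    """Monte Carlo estimate of the same integral, importance-sampling
    from the density (1 - alpha) * s^{-alpha} via inverse-CDF."""
    u = rng.random(n)
    s = u ** (1.0 / (1.0 - alpha))
    return np.mean(f(s)) / (1.0 - alpha)

rng = np.random.default_rng(0)
ref = gauss_jacobi_singular(f, alpha, 200)       # high-order reference
err_gauss = abs(gauss_jacobi_singular(f, alpha, 8) - ref)   # spectral accuracy, few nodes
err_mc = abs(mc_singular(f, alpha, 10_000, rng) - ref)      # O(n^{-1/2}) error
print(err_gauss, err_mc)
```

With a handful of nodes, the quadrature error is near machine precision for smooth `f`, while the Monte Carlo estimate retains O(n^{-1/2}) statistical error; this is the variance reduction motivating the quadrature-based variants.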