Wave propagation problems are typically formulated as partial differential equations (PDEs) on unbounded domains. The classical approach to solving such problems truncates them to problems on bounded domains by designing artificial boundary conditions or perfectly matched layers, which typically requires significant effort, and the presence of nonlinearity in the equation makes such designs even more challenging. Emerging deep learning-based methods for solving PDEs, with physics-informed neural networks (PINNs) as a representative, still face significant challenges when applied directly to PDEs on unbounded domains: computations performed on a bounded domain of interest without imposing boundary constraints may lack a unique solution, causing PINNs to fail. In light of this, this paper proposes a novel and effective operator learning-based method for solving PDEs on unbounded domains. The key idea behind this method is to generate high-quality training data. Specifically, we construct a family of approximate analytical solutions to the target PDE based on its initial condition and source term. Then, using these constructed data comprising exact solutions, initial conditions, and source terms, we train MIONet, an operator learning model capable of handling multiple inputs, to learn the mapping from the initial condition and source term to the PDE solution on a bounded domain of interest. Finally, we exploit the generalization ability of this model to predict the solution of the target PDE. The effectiveness of the method is demonstrated by solving the wave equation and the Schrödinger equation defined on unbounded domains. More importantly, the proposed method can handle nonlinear problems, as demonstrated by solving Burgers' equation and the Korteweg-de Vries (KdV) equation.