In recent years, research on solving partial differential equations (PDEs) with artificial neural networks has attracted considerable attention. In this line of work, the neural network models are usually designed by human experience or trial and error. Although several model search methods have emerged, they primarily concentrate on optimizing the hyperparameters of fully connected models within the physics-informed neural networks (PINNs) framework, and their search spaces are relatively restricted, which limits the exploration of superior models. This article proposes an evolutionary computation method for discovering PINNs models with higher approximation accuracy and faster convergence. In addition to searching the number of layers and the number of neurons per hidden layer, the method simultaneously explores the optimal shortcut connections between layers and novel parametric activation functions expressed as binary trees. During evolution, a dynamic population size and training epochs (DPSTE) strategy is adopted, which significantly increases the number of models explored and facilitates the discovery of models with fast convergence. In experiments, the performance of models found through Bayesian optimization, random search, and evolution is compared on the Klein-Gordon, Burgers, and Lamé equations. The results confirm that the models discovered by the proposed evolutionary computation method generally exhibit superior approximation accuracy and convergence rate, and these models also generalize well with respect to the source term, initial and boundary conditions, equation coefficients, and computational domain. The corresponding code is available at https://github.com/MathBon/Discover-PINNs-Model.
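To make the idea of activation functions "expressed by binary trees" concrete, the following is a minimal sketch (not the paper's implementation; node names, the operator set, and the example tree are illustrative assumptions) of encoding a parametric activation function as a binary expression tree whose `param` leaves hold trainable scalars:

```python
import numpy as np

class Node:
    """A node in a binary expression tree encoding an activation function.

    op is one of: 'x' (the input), 'param' (a trainable scalar leaf),
    a unary operator ('sin', 'tanh'), or a binary operator ('+', '*').
    """
    def __init__(self, op, left=None, right=None, param=None):
        self.op = op
        self.left = left
        self.right = right
        self.param = param  # trainable scalar, used only by 'param' leaves

    def evaluate(self, x):
        if self.op == 'x':
            return x
        if self.op == 'param':
            return np.full_like(x, self.param)
        if self.op == 'sin':
            return np.sin(self.left.evaluate(x))
        if self.op == 'tanh':
            return np.tanh(self.left.evaluate(x))
        if self.op == '+':
            return self.left.evaluate(x) + self.right.evaluate(x)
        if self.op == '*':
            return self.left.evaluate(x) * self.right.evaluate(x)
        raise ValueError(f"unknown operator: {self.op}")

# Example tree encoding f(x) = a*sin(x) + tanh(b*x), with a and b trainable.
tree = Node('+',
            left=Node('*', Node('param', param=1.0), Node('sin', Node('x'))),
            right=Node('tanh', Node('*', Node('param', param=0.5), Node('x'))))

x = np.linspace(-1.0, 1.0, 5)
print(tree.evaluate(x))
```

In an evolutionary search of this kind, mutation and crossover would operate on such trees (swapping subtrees, changing operators), while the `param` leaves are trained by gradient descent together with the network weights.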