Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for solving partial differential equations~(PDEs) across scientific and engineering domains. However, traditional PINN architectures typically rely on large, fully connected multilayer perceptrons~(MLPs) that lack the sparsity and modularity inherent in many classical numerical solvers. A critical open question for PINNs is: what is the minimum network complexity, in terms of nodes, layers, and connections, needed to provide acceptable performance? To address this question, we investigate a novel approach that merges established PINN methodologies with brain-inspired neural network techniques. Specifically, we use Brain-Inspired Modular Training~(BIMT), which leverages concepts such as locality, sparsity, and modularity inspired by the organization of the brain. With brain-inspired PINNs, we demonstrate the evolution of PINN architectures from large, fully connected structures to bare-minimum, compact MLPs, often consisting of only a few neural units. Moreover, using brain-inspired PINNs, we showcase the spectral bias phenomenon in PINN architectures: bare-minimum architectures solving problems with high-frequency components require more neural units than those solving low-frequency problems. Finally, through BIMT training on simple problems, we derive basic PINN building blocks, akin to the convolutional and attention modules of deep neural networks, that enable the construction of modular PINN architectures. Our experiments show that brain-inspired PINN training yields architectures that minimize compute and memory resources while still providing accurate results.
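To illustrate the core mechanism behind BIMT, the following is a minimal sketch of a locality-weighted sparsity penalty: each connection is charged its weight magnitude times the distance between its endpoint neurons, so training favors sparse, spatially local wiring. The function name `bimt_penalty`, the one-dimensional neuron layout, and the specific weighting form are illustrative assumptions; the full BIMT procedure also includes steps (such as neuron swapping) that are omitted here.

```python
import numpy as np

def bimt_penalty(weights, in_pos, out_pos, lam=1e-3):
    """Locality-weighted L1 penalty in the spirit of BIMT (a sketch).

    weights : (n_out, n_in) weight matrix of one layer
    in_pos  : (n_in,) spatial coordinates assigned to input neurons
    out_pos : (n_out,) spatial coordinates assigned to output neurons
    lam     : regularization strength

    Each connection pays |w_ij| * (1 + d_ij), where d_ij is the
    distance between the embedded positions of its two neurons, so
    both dense and long-range wiring are penalized.
    """
    # pairwise distances between output and input neuron positions
    d = np.abs(out_pos[:, None] - in_pos[None, :])
    return lam * np.sum(np.abs(weights) * (1.0 + d))

# Toy 3x4 layer with neurons laid out on the unit interval.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])
in_pos = np.linspace(0.0, 1.0, 4)   # input neuron coordinates
out_pos = np.linspace(0.0, 1.0, 3)  # output neuron coordinates

print(bimt_penalty(W, in_pos, out_pos, lam=1.0))
```

In a PINN setting, a term like this would simply be added to the usual PDE residual and boundary losses; gradient descent then drives most weights to zero, leaving the compact, local architectures described above.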