Approximate solutions of partial differential equations (PDEs) obtained by neural networks are highly affected by hyperparameter settings. For instance, model training strongly depends on the design of the loss function, including the choice of weight factors for its different terms, and on the sampling set used for numerical integration; other hyperparameters, such as the network architecture and the optimizer settings, also affect model performance. On the other hand, suitable hyperparameter settings are known to differ between model problems, and no universal rule for the choice of hyperparameters is currently known. In this paper, various hyperparameter settings are tested numerically for second order elliptic model problems to provide a practical guide for efficient and accurate neural network approximation. While a full study of all possible hyperparameter settings is not feasible, we focus on the formulation of the PDE loss and the incorporation of the boundary conditions, the choice of collocation points associated with numerical integration schemes, and various approaches for dealing with loss imbalances; these are studied extensively on various model problems. In addition to several Poisson model problems, a nonlinear problem and an eigenvalue problem are also considered.
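To make the role of the weight factors and collocation points concrete, the following is a minimal sketch of such a weighted loss for a 1D Poisson problem -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions, imposed here by a penalty term. The function name, the uniform collocation sampling, and the use of finite differences in place of automatic differentiation are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def weighted_pinn_loss(u, f, n_interior=64, w_pde=1.0, w_bc=100.0, h=1e-4):
    """Weighted residual loss for -u'' = f on (0,1) with u(0) = u(1) = 0.

    u, f : callables (a hypothetical candidate network and the source term).
    The second derivative is approximated by central finite differences;
    an actual PINN implementation would use automatic differentiation.
    """
    # Interior collocation points; uniform sampling is only one of the
    # possible choices whose effect the paper studies numerically.
    x = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    pde_residual = np.mean((-u_xx - f(x)) ** 2)   # PDE loss term
    bc_residual = u(0.0) ** 2 + u(1.0) ** 2       # boundary penalty term
    # The ratio w_bc / w_pde is exactly the kind of weight factor whose
    # imbalance the various balancing approaches try to address.
    return w_pde * pde_residual + w_bc * bc_residual

# For the exact solution of -u'' = pi^2 sin(pi x), the loss is near zero.
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
print(weighted_pinn_loss(u_exact, f))
```

A poor candidate (e.g. the zero function) yields a large loss dominated by the PDE term, which illustrates why the relative scaling of the two terms matters during training.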