In recent years, neural networks have achieved remarkable progress in various fields and have also attracted much attention for their application to scientific problems. A line of neural-network-based methods for solving partial differential equations (PDEs), such as Physics-Informed Neural Networks (PINNs) and the Deep Ritz Method (DRM), has emerged. Although these methods outperform classical numerical methods in certain cases, the optimization problems involving neural networks are typically non-convex and non-smooth, which can lead to unsatisfactory PDE solutions. In contrast to deterministic neural networks, random neural networks have hidden weights sampled from a prior distribution, and only the output weights participate in training. This makes training much simpler, but it remains unclear how the prior distribution should be chosen. In this paper, we focus on Barron-type functions and approximate them under Sobolev norms by random neural networks with an explicit prior distribution. Beyond the approximation error, we also derive bounds on the optimization and generalization errors of random neural networks for solving PDEs whose solutions are Barron-type functions.
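To make the random-neural-network setup concrete, here is a minimal hedged sketch (not the paper's method or its prior): hidden weights and biases are sampled once from an assumed prior and frozen, so fitting the output weights reduces to a convex least-squares problem. The target function, prior distributions, and network width below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 200                      # number of random hidden neurons (illustrative)
W = rng.normal(size=(m, 1))  # hidden weights sampled from N(0, 1) -- an assumed prior
b = rng.uniform(-1, 1, m)    # hidden biases sampled from U(-1, 1) -- an assumed prior

def features(x):
    # Random feature map with tanh activation; x has shape (n, 1).
    # W and b are fixed after sampling: they do not participate in training.
    return np.tanh(x @ W.T + b)

# Stand-in smooth target on [-1, 1] (playing the role of a Barron-type function)
x = np.linspace(-1, 1, 100).reshape(-1, 1)
y = np.sin(np.pi * x).ravel()

# Only the output weights c are trained: a convex linear least-squares problem
Phi = features(x)
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ c
print("max abs error:", np.max(np.abs(pred - y)))
```

Because the hidden layer is frozen, the non-convex, non-smooth training difficulties mentioned above disappear; what remains open, and what the paper addresses, is how the choice of prior affects approximation quality.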