In deep learning, the mean of a chosen error metric, such as squared or absolute error, is commonly used as a loss function. While effective at reducing the average error, this approach often fails to address localized outliers, leading to significant inaccuracies in regions with sharp gradients or discontinuities. This issue is particularly evident in physics-informed neural networks (PINNs), where such localized errors are expected to arise and degrade the overall solution. To overcome this limitation, we propose a novel loss function that combines the mean and the standard deviation of the chosen error metric. By minimizing this combined loss function, the method promotes a more uniform error distribution and reduces the impact of localized high-error regions. The proposed loss function was tested on three problems: Burgers' equation, 2D linear elastic solid mechanics, and the 2D steady Navier–Stokes equations. Under the same number of iterations and the same weight initialization, it produced higher-quality solutions and lower maximum errors than the standard mean-based loss.
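The combined objective described above can be sketched as follows. This is a minimal illustration, assuming a simple additive combination with a weighting factor `lam`; the abstract states only that the mean and standard deviation of the error metric are combined, so the exact form and weight are assumptions.

```python
import numpy as np

def combined_loss(errors, lam=1.0):
    # Combined objective: mean of the chosen error metric plus its
    # standard deviation. The additive form and the weight `lam` are
    # illustrative assumptions, not the paper's exact formulation.
    return np.mean(errors) + lam * np.std(errors)

# Toy comparison: two error fields with the same mean but different spread.
uniform = np.array([0.1, 0.1, 0.1, 0.1])   # errors spread evenly
spiky   = np.array([0.0, 0.0, 0.0, 0.4])   # localized high-error region

# A plain mean-based loss cannot distinguish these two fields, while the
# combined loss penalizes the spiky field for its larger standard deviation.
```

Because both fields share the same mean, minimizing the combined loss steers the optimizer away from solutions with concentrated error, which is the behavior the abstract targets for sharp gradients and discontinuities.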