Physics-Informed Neural Networks (PINNs) are a powerful deep learning method capable of providing solutions and parameter estimates for physical systems. Owing to the complexity of their neural network structure, their convergence speed remains limited compared to numerical methods, especially in applications that model realistic systems. As in traditional neural networks, initialization draws the initial weights from a random distribution, which can create severe convergence bottlenecks. To overcome this problem, we build on current studies of optimal initial weights in traditional neural networks. In this paper, we use a convex optimization model to improve the weight initialization of PINNs and accelerate convergence. We investigate two optimization models as a first training step, termed pre-training: one involving only the boundary conditions and one that also includes the physics. The optimization targets the first layer of the neural network part of the PINN model, while the remaining weights are randomly initialized. We test the methods on a practical application of the heat diffusion equation that models the temperature distribution in power transformers. At the current stage, the PINN model with boundary pre-training is the fastest-converging method.
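The boundary pre-training idea can be sketched as a convex least-squares fit of one layer to boundary and initial data before full PINN training begins. The abstract states that the paper optimizes the *first* layer; the minimal sketch below instead solves for the *output* layer in closed form over fixed random hidden features, since that variant is convex and self-contained. All problem details here (a 1D heat problem with zero Dirichlet boundaries and a sinusoidal initial condition) are illustrative assumptions, not the paper's transformer setup.

```python
import numpy as np

# Hedged sketch of boundary pre-training: fit one layer of a small
# network to boundary/initial data by convex least squares, giving a
# warm start for the subsequent physics-based PINN training.
# NOTE: the paper pre-trains the first layer; here we solve for the
# output layer (ridge regression over fixed random tanh features),
# which is a convex stand-in illustrating the same idea.

rng = np.random.default_rng(0)

# Illustrative boundary data for u(x, t) on x in [0, 1], t in [0, 1]:
# u(0, t) = 0, u(1, t) = 0, u(x, 0) = sin(pi * x)   (assumed, not the paper's)
t = rng.uniform(0.0, 1.0, size=50)
x = rng.uniform(0.0, 1.0, size=50)
X_bc = np.vstack([
    np.column_stack([np.zeros(50), t]),   # left boundary x = 0
    np.column_stack([np.ones(50), t]),    # right boundary x = 1
    np.column_stack([x, np.zeros(50)]),   # initial condition t = 0
])
y_bc = np.concatenate([np.zeros(50), np.zeros(50), np.sin(np.pi * x)])

# Fixed, randomly initialized first layer (inputs: x and t)
n_hidden = 64
W1 = rng.normal(size=(2, n_hidden))
b1 = rng.normal(size=n_hidden)
H = np.tanh(X_bc @ W1 + b1)               # hidden features at boundary points

# Convex step: ridge least squares for the output weights
lam = 1e-6
W2 = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y_bc)

# The pre-trained network now fits the boundary data; full PINN
# training with the physics-residual loss would start from here.
residual = np.linalg.norm(H @ W2 - y_bc) / np.linalg.norm(y_bc)
print(residual)
```

After this closed-form step, standard gradient-based PINN training resumes with all layers trainable, starting from weights that already satisfy the boundary conditions approximately rather than from a purely random draw.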