Neural networks that synergistically integrate data and physical laws offer great promise for modeling dynamical systems. However, iterative gradient-based optimization of network parameters is often computationally expensive and suffers from slow convergence. In this work, we present a backpropagation-free algorithm that accelerates the training of neural networks for approximating Hamiltonian systems through data-agnostic and data-driven sampling schemes. We empirically show that data-driven sampling of the network parameters outperforms both data-agnostic sampling and traditional gradient-based iterative optimization when approximating functions with steep gradients or wide input domains. On CPUs, our approach is more than 100 times faster than Hamiltonian Neural Networks trained with gradient-based iterative optimization, and it is more than four orders of magnitude more accurate on chaotic examples, including the H\'enon-Heiles system.
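To make the backpropagation-free idea concrete, the following is a minimal, illustrative sketch of data-driven parameter sampling for a single-hidden-layer network: hidden weights and biases are constructed from randomly chosen pairs of training points (so each neuron's active region spans the data), and the only trained parameters are the last-layer coefficients, obtained by linear least squares. The target function, network width, and the exact weight/bias construction below are our own assumptions for illustration, not the paper's specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: Hamiltonian of a 1-D harmonic oscillator, H(q, p) = (q^2 + p^2) / 2.
# (Illustrative stand-in for the Hamiltonian systems treated in the paper.)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = 0.5 * np.sum(X**2, axis=1)

def sample_weights_data_driven(X, n_hidden, rng):
    """Data-driven sampling: build each hidden weight from a random pair of
    training points, so the neuron's transition region lies between data
    points. Scale factors here are illustrative choices."""
    i = rng.integers(0, len(X), size=n_hidden)
    j = rng.integers(0, len(X), size=n_hidden)
    d = X[j] - X[i]
    norms = np.maximum(np.sum(d**2, axis=1, keepdims=True), 1e-12)
    W = 2.0 * d / norms                   # weight points along the pair direction
    b = -np.sum(W * X[i], axis=1) - 1.0   # place the activation between the pair
    return W, b

n_hidden = 200
W, b = sample_weights_data_driven(X, n_hidden, rng)
Phi = np.tanh(X @ W.T + b)                # hidden features, fixed after sampling

# No backpropagation: the only fitted parameters are the last-layer
# coefficients, computed with a single linear least-squares solve.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ coef
rel_err = np.linalg.norm(pred - y) / np.linalg.norm(y)
```

Because the hidden layer is fixed after sampling, training cost is dominated by one least-squares solve, which is the source of the large speedups over iterative gradient-based optimization reported in the abstract.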