Learning dynamical systems through purely data-driven methods is challenging because such models do not capture the underlying conservation laws that would allow them to generalize correctly. Port-Hamiltonian neural network methods have recently been applied successfully to the modeling of mechanical systems. However, even though these methods are designed around power-balance principles, they usually do not employ power-preserving discretizations and instead often rely on Runge-Kutta integrators. In this work, we propose embedding a second-order discrete gradient method in the learning of dynamical systems with port-Hamiltonian neural networks. Numerical results are provided for three systems deliberately selected to span different ranges of dynamical behavior under control: a baseline harmonic oscillator with quadratic energy storage; a Duffing oscillator, whose non-quadratic Hamiltonian yields amplitude-dependent effects; and a self-sustained oscillator, which can be stabilized in a controlled limit cycle through the incorporation of nonlinear dissipation. We show that this discrete gradient method outperforms a Runge-Kutta method of the same order. Experiments are also carried out to compare two theoretically equivalent port-Hamiltonian system formulations and to analyze the impact of regularizing the Jacobian of port-Hamiltonian neural networks during training.
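The energy-consistency property that motivates the discrete gradient approach can be illustrated with a minimal sketch (our own illustration under stated assumptions, not the paper's implementation). Below, a Gonzalez-style midpoint discrete gradient is applied to an assumed undamped Duffing-type Hamiltonian H(q, p) = p²/2 + q²/2 + q⁴/4. Because the discrete gradient satisfies the secant condition ∇̄H(z, z')·(z' − z) = H(z') − H(z), the implicit update z' = z + h J ∇̄H(z, z') conserves H up to solver tolerance, whereas an explicit Runge-Kutta step of the same order exhibits energy drift.

```python
import numpy as np

def H(z):
    # Assumed undamped Duffing Hamiltonian: H(q, p) = p^2/2 + q^2/2 + q^4/4
    q, p = z
    return 0.5 * p**2 + 0.5 * q**2 + 0.25 * q**4

def grad_H(z):
    q, p = z
    return np.array([q + q**3, p])

# Canonical symplectic structure matrix J
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def discrete_gradient(z0, z1):
    """Gonzalez midpoint discrete gradient.

    Satisfies the secant condition dg . (z1 - z0) = H(z1) - H(z0),
    which is what makes the scheme energy-consistent by construction.
    """
    dz = z1 - z0
    g = grad_H(0.5 * (z0 + z1))
    denom = dz @ dz
    if denom < 1e-14:  # coincident points: fall back to the exact gradient
        return g
    return g + (H(z1) - H(z0) - g @ dz) / denom * dz

def dg_step(z, h, iters=50, tol=1e-12):
    """One implicit step z' = z + h * J * dg(z, z'), via fixed-point iteration."""
    z1 = z + h * (J @ grad_H(z))  # explicit Euler predictor
    for _ in range(iters):
        z_new = z + h * (J @ discrete_gradient(z, z1))
        if np.linalg.norm(z_new - z1) < tol:
            return z_new
        z1 = z_new
    return z1

# Integrate for many steps and check the energy error.
z = np.array([1.0, 0.0])
h, n_steps = 0.1, 1000
E0 = H(z)
for _ in range(n_steps):
    z = dg_step(z, h)
drift = abs(H(z) - E0)
print(drift)  # remains at solver tolerance rather than growing as O(h^2) per step
```

Note that the update is implicit: the unknown state z' appears inside the discrete gradient, so each step requires a small nonlinear solve (here a fixed-point iteration). This extra cost per step is the price of the exact discrete power balance, which is the trade-off the abstract contrasts against explicit Runge-Kutta schemes.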