In recent years, nonlinear dynamic system identification using artificial neural networks has garnered attention due to its broad potential applications across science and engineering. However, purely data-driven approaches often struggle with extrapolation and may yield physically implausible forecasts. Furthermore, the learned dynamics can exhibit instabilities, making it difficult to apply such models safely and robustly. This article introduces stable port-Hamiltonian neural networks, a machine learning architecture that incorporates physical biases of energy conservation and dissipation while ensuring global Lyapunov stability of the learned dynamics. Through illustrative and real-world examples, we demonstrate that these strong inductive biases facilitate robust learning of stable dynamics from sparse data, while avoiding instability and surpassing purely data-driven approaches in accuracy and physically meaningful generalization. Furthermore, the model's applicability and potential for data-driven surrogate modeling are showcased on multi-physics simulation data.
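To make the energy-based inductive bias concrete, here is a minimal, hypothetical sketch (not the paper's actual architecture) of port-Hamiltonian dynamics ẋ = (J − R)∇H(x), where J is skew-symmetric (energy-conserving interconnection), R is positive semi-definite (dissipation), and a fixed positive-definite quadratic form stands in for a learned Hamiltonian network H_θ(x). By construction dH/dt = −∇H(x)ᵀ R ∇H(x) ≤ 0, so the energy H is also a Lyapunov function and trajectories are stable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned Hamiltonian H_theta(x): a fixed symmetric
# positive-definite quadratic form (illustrative assumption only).
A = rng.standard_normal((2, 2))
P = A.T @ A + np.eye(2)

def H(x):
    return 0.5 * x @ P @ x      # scalar energy, H(x) >= 0

def grad_H(x):
    return P @ x

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric: conserves energy
R = np.array([[0.1, 0.0], [0.0, 0.1]])   # PSD: dissipates energy

def step(x, dt=1e-3):
    # Explicit Euler step of the port-Hamiltonian vector field
    return x + dt * (J - R) @ grad_H(x)

x = np.array([1.0, 0.5])
energies = [H(x)]
for _ in range(5000):
    x = step(x)
    energies.append(H(x))

# Dissipation makes H non-increasing along the trajectory, so the
# learned dynamics cannot blow up: H doubles as a Lyapunov function.
print(f"H(x0) = {energies[0]:.4f}, H(x_T) = {energies[-1]:.4f}")
```

In a full stable port-Hamiltonian neural network, H, J, and R would be parameterized by neural networks (with the skew-symmetry and positive-semi-definiteness constraints enforced structurally), which is what yields the global stability guarantee described above.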