Neural differential equations offer a powerful approach for learning dynamics from data. However, they do not impose known constraints that the learned model should obey. It is well known that enforcing constraints in surrogate models can enhance their generalizability and numerical stability. In this paper, we introduce projected neural differential equations (PNDEs), a new method for constraining neural differential equations by projecting the learned vector field onto the tangent space of the constraint manifold. In tests on several challenging examples, including chaotic dynamical systems and state-of-the-art power grid models, PNDEs outperform existing methods while requiring fewer hyperparameters. The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems, particularly in complex domains where accuracy and reliability are essential.
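The core idea can be illustrated with a minimal sketch. Assuming a single smooth constraint g(u) = 0 with Jacobian G = ∂g/∂u, the orthogonal projection P = I − Gᵀ(GGᵀ)⁻¹G maps any vector field onto the tangent space of the constraint manifold; here we take the unit sphere ‖u‖² = 1 as the constraint. The function names and the example vector field are illustrative, not the paper's actual implementation:

```python
import numpy as np

def constraint_jacobian(u):
    # For the unit-sphere constraint g(u) = ||u||^2 - 1 = 0,
    # the Jacobian is G = dg/du = 2 u^T (a 1 x n row vector).
    return 2.0 * u.reshape(1, -1)

def project_to_tangent(f_u, u):
    # Orthogonal projection P = I - G^T (G G^T)^-1 G removes the
    # component of the learned vector field normal to the manifold,
    # so trajectories of du/dt = P f(u) stay on g(u) = 0.
    G = constraint_jacobian(u)
    n = u.size
    P = np.eye(n) - G.T @ np.linalg.solve(G @ G.T, G)
    return P @ f_u

# A stand-in for an unconstrained "learned" vector field on R^3.
u = np.array([0.0, 0.0, 1.0])   # point on the unit sphere
f = np.array([0.3, -0.5, 0.8])  # raw network prediction at u
f_proj = project_to_tangent(f, u)

# The projected field is tangent to the sphere: G @ f_proj = 0.
print(np.allclose(constraint_jacobian(u) @ f_proj, 0.0))  # True
```

In practice the projected field P(u) f(u) would be handed to an off-the-shelf ODE solver in place of the raw network output f(u).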