Neural ordinary differential equations (Neural ODEs) are a class of machine learning models that approximate the time derivative of hidden states with a neural network. They are powerful tools for modeling continuous-time dynamical systems, enabling the analysis and prediction of complex temporal behaviors. However, improving their stability and physical interpretability remains a challenge. This paper introduces new conservation relations into Neural ODEs using Lie symmetries of both the hidden-state dynamics and the backpropagation dynamics. These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model. To illustrate the method, the paper derives Lie symmetries and conservation laws for a simple Neural ODE designed to monitor charged particles in a sinusoidal electric field. New loss functions are constructed from these conservation relations, demonstrating the applicability of symmetry-regularized Neural ODEs to typical modeling tasks such as the data-driven discovery of dynamical systems.
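The core idea of a symmetry-regularized loss can be sketched in a few lines. The following is a minimal, illustrative NumPy example, not the paper's actual implementation: all names (`f_theta`, `conserved_quantity`, `symmetry_regularized_loss`) and the harmonic-oscillator toy system are assumptions made here for concreteness. It shows a Neural ODE stand-in dh/dt = f_theta(h), integrated with forward Euler, whose loss adds a penalty for drift in a known conserved quantity C(h) along the trajectory.

```python
import numpy as np

def f_theta(h, W):
    """Toy stand-in for the neural network: a linear vector field dh/dt = W @ h."""
    return W @ h

def integrate(h0, W, dt=0.01, steps=100):
    """Forward Euler rollout of the hidden-state ODE."""
    traj = [h0]
    h = h0
    for _ in range(steps):
        h = h + dt * f_theta(h, W)
        traj.append(h)
    return np.array(traj)

def conserved_quantity(traj):
    """Illustrative invariant for a harmonic oscillator: C(x, v) = x^2 + v^2."""
    return traj[:, 0] ** 2 + traj[:, 1] ** 2

def symmetry_regularized_loss(traj, target, lam=1.0):
    """Data-fitting loss plus a penalty on drift of C along the trajectory."""
    data_loss = np.mean((traj - target) ** 2)
    c = conserved_quantity(traj)
    conservation_penalty = np.mean((c - c[0]) ** 2)
    return data_loss + lam * conservation_penalty

# True dynamics dx/dt = v, dv/dt = -x preserves C exactly (a rotation).
W_true = np.array([[0.0, 1.0], [-1.0, 0.0]])
h0 = np.array([1.0, 0.0])
target = integrate(h0, W_true)

# A perturbed model drifts off the level set of C, so its penalty is larger.
W_bad = np.array([[0.1, 1.0], [-1.0, 0.1]])
loss_true = symmetry_regularized_loss(integrate(h0, W_true), target)
loss_bad = symmetry_regularized_loss(integrate(h0, W_bad), target)
```

In a real training loop the penalty term would be differentiated with respect to the network parameters alongside the data loss; here it merely illustrates that trajectories violating the conservation law incur a higher loss.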