Neural Ordinary Differential Equations (NODEs) often struggle to adapt to new dynamic behaviors caused by parameter changes in the underlying system, even when these dynamics are similar to previously observed behaviors. This problem becomes more challenging when the changing parameters are unobserved, meaning their values or influence cannot be directly measured when collecting data. To address this issue, we introduce Neural Context Flow (NCF), a robust and interpretable Meta-Learning framework that includes uncertainty estimation. NCF uses a higher-order Taylor expansion to enable contextual self-modulation, allowing context vectors to influence dynamics from other domains while also modulating themselves. After establishing convergence guarantees, we empirically test NCF and compare it to related adaptation methods. Our results show that NCF achieves state-of-the-art out-of-distribution (OoD) performance on 5 of 6 linear and non-linear benchmark problems. Through extensive experiments, we explore the flexible model architecture of NCF and the representations encoded within the learned context vectors. Our findings highlight the potential implications of NCF for foundation models in the physical sciences, offering a promising approach to improving the adaptability and generalization of NODEs across scientific applications. Our code is openly available at \url{https://github.com/ddrous/ncflow}.
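To make the contextual self-modulation idea concrete, the sketch below illustrates a first-order Taylor expansion of a vector field in its context argument: the dynamics under one environment's context $\xi_j$ are approximated by expanding around another environment's context $\xi_i$. This is a minimal illustration, not the NCF implementation: the vector field `f`, the context parameterization, and the finite-difference Jacobian-vector product are all assumptions made for the example (the actual framework uses a learned neural vector field and autodiff).

```python
import numpy as np

def f(x, xi):
    # Hypothetical vector field: a harmonic oscillator whose damping
    # coefficient is an unobserved context parameter xi[0].
    return np.array([x[1], -x[0] - xi[0] * x[1]])

def taylor_first_order(x, xi_i, xi_j, eps=1e-6):
    # First-order Taylor expansion of f in the context argument, around xi_i,
    # evaluated at xi_j:
    #   f(x, xi_j) ≈ f(x, xi_i) + [d f / d xi](x, xi_i) · (xi_j - xi_i).
    # The Jacobian-vector product is taken by finite differences here; an
    # autodiff framework (e.g. JAX's jvp) would supply it exactly.
    d = xi_j - xi_i
    jvp = (f(x, xi_i + eps * d) - f(x, xi_i)) / eps
    return f(x, xi_i) + jvp

x = np.array([1.0, 0.5])
xi_i = np.array([0.1])   # context of the "anchor" environment
xi_j = np.array([0.3])   # context of a neighbouring environment

approx = taylor_first_order(x, xi_i, xi_j)
exact = f(x, xi_j)
print(np.allclose(approx, exact, atol=1e-4))
```

Because this toy field is linear in the context, the first-order expansion already matches the exact dynamics; for a nonlinear neural vector field, higher-order terms in the expansion tighten the approximation, which is where NCF's higher-order variant comes in.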