Modeling complex dynamical systems under varying conditions is computationally intensive, often rendering high-fidelity simulations intractable. Although reduced-order models (ROMs) offer a promising solution, current methods often struggle with stochastic dynamics and fail to quantify prediction uncertainty, limiting their utility in robust decision-making contexts. To address these challenges, we introduce a data-driven framework for learning continuous-time stochastic ROMs that generalize across parameter spaces and forcing conditions. Our approach, based on amortized stochastic variational inference, leverages a reparametrization trick for Markov Gaussian processes to eliminate the need for computationally expensive forward solvers during training. This enables us to jointly learn a probabilistic autoencoder and stochastic differential equations governing the latent dynamics, at a computational cost that is independent of the dataset size and system stiffness. Additionally, our approach offers the flexibility of incorporating physics-informed priors if available. Numerical studies are presented for three challenging test problems, where we demonstrate excellent generalization to unseen parameter combinations and forcings, and significant efficiency gains compared to existing approaches.