In this work, we present latent dynamics models (LDMs), a novel mathematical framework for reduced order modeling of parameterized nonlinear time-dependent PDEs. Our framework casts this task as a nonlinear dimensionality reduction problem, while constraining the latent state to evolve according to an (unknown) dynamical system. A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution. We analyze the impact of using an explicit Runge-Kutta scheme in the time-discrete setting, yielding the $\Delta\text{LDM}$ formulation, and further explore the learnable setting, $\Delta\text{LDM}_\theta$, in which deep neural networks approximate the discrete LDM components while providing a bounded approximation error with respect to the FOM. Moreover, we extend the concept of parameterized Neural ODEs (recently proposed as a way to build data-driven dynamical systems with varying input parameters) to a convolutional architecture, in which input-parameter information is injected through an affine modulation mechanism, and we design a convolutional autoencoder that retains spatial coherence, thus enhancing interpretability at the latent level. Numerical experiments on the Burgers' and advection-reaction-diffusion equations demonstrate the framework's ability to deliver, in a multi-query context, a time-continuous approximation of the FOM solution, so that the LDM approximation can be queried at any given time instance while retaining a prescribed level of accuracy. Our findings highlight the potential of the proposed LDMs as a mathematically rigorous framework for enhancing the accuracy and approximation capabilities of reduced order modeling for time-dependent parameterized PDEs.
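To illustrate the explicit Runge-Kutta time-stepping that underlies the $\Delta\text{LDM}$ formulation, the following is a minimal sketch in plain Python. The right-hand side `f` is a hypothetical stand-in for the learned, parameter-dependent latent dynamics (in the actual framework it is a neural network); here a toy linear decay with rate `mu` is used so the result can be checked against the exact solution.

```python
import math

def rk4_step(f, z, mu, t, dt):
    """One explicit fourth-order Runge-Kutta step for dz/dt = f(z, mu, t)."""
    k1 = f(z, mu, t)
    k2 = f(z + 0.5 * dt * k1, mu, t + 0.5 * dt)
    k3 = f(z + 0.5 * dt * k2, mu, t + 0.5 * dt)
    k4 = f(z + dt * k3, mu, t + dt)
    return z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy latent dynamics: linear decay whose rate is the input parameter mu.
f = lambda z, mu, t: -mu * z

z, t, dt, mu = 1.0, 0.0, 0.01, 0.5
for _ in range(100):  # integrate the latent state from t = 0 to t = 1
    z = rk4_step(f, z, mu, t, dt)
    t += dt
# z now approximates the exact solution exp(-0.5)
```

Because the scheme is applied in the (low-dimensional) latent space, each step is cheap regardless of the FOM dimension; decoding back to the full state is only needed at the query times of interest.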