Spatiotemporal modeling is critical for understanding complex systems across scientific and engineering disciplines, but governing equations are often not fully known or are computationally intractable due to inherent system complexity. Data-driven reduced-order models (ROMs) offer a promising route to fast, accurate spatiotemporal forecasting by computing solutions in a compressed latent space. However, these models often neglect temporal correlations between consecutive snapshots when constructing the latent space, leading to suboptimal compression, jagged latent trajectories, and limited ability to extrapolate in time. To address these issues, this paper introduces a continuous operator learning framework that incorporates jerk regularization into the learning of the compressed latent space. Penalizing the jerk, the third time derivative of the latent trajectory, promotes smoothness and sparsity of the latent dynamics, which not only improves accuracy and convergence speed but also helps identify intrinsic latent-space coordinates. Consisting of an implicit neural representation (INR)-based autoencoder and a neural ODE latent dynamics model, the framework supports inference at any desired spatial or temporal resolution. Its effectiveness is demonstrated on a two-dimensional unsteady flow problem governed by the Navier-Stokes equations, highlighting the potential to expedite high-fidelity simulations across scientific and engineering applications.
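The jerk penalty described above can be illustrated with a minimal sketch. The abstract does not specify the discretization, so the snippet below assumes latent snapshots sampled at a uniform time step and approximates the third time derivative with a third-order forward finite difference; the function name and interface are hypothetical, not the authors' implementation.

```python
import numpy as np

def jerk_penalty(z: np.ndarray, dt: float) -> float:
    """Mean squared jerk of a discrete latent trajectory.

    z  : array of shape (T, d) holding T consecutive latent snapshots
         of dimension d (assumed uniformly spaced in time).
    dt : time step between consecutive snapshots.
    """
    # Third-order forward difference approximating d^3 z / dt^3:
    # (z[t+3] - 3 z[t+2] + 3 z[t+1] - z[t]) / dt^3
    jerk = (z[3:] - 3 * z[2:-1] + 3 * z[1:-2] - z[:-3]) / dt**3
    # Average the squared Euclidean norm of the jerk over the trajectory.
    return float(np.mean(np.sum(jerk**2, axis=1)))
```

In training, a term like `lambda_jerk * jerk_penalty(z, dt)` would be added to the reconstruction loss, driving the autoencoder toward smooth latent trajectories; a linear-in-time trajectory incurs zero penalty, since its third derivative vanishes.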