Reduced-order modeling lies at the interface of numerical analysis and data-driven scientific computing, providing principled ways to compress high-fidelity simulations in science and engineering. We propose a training framework that couples a continuous-time form of operator inference with the adjoint-state method to obtain robust data-driven reduced-order models. This method minimizes a trajectory-based loss between reduced-order solutions and projected snapshot data, which removes the need to estimate time derivatives from noisy measurements and provides intrinsic temporal regularization through time integration. We derive the corresponding continuous adjoint equations to compute gradients efficiently and implement a gradient-based optimizer to update the reduced model parameters. Each iteration requires only one forward reduced-order solve and one adjoint solve, followed by inexpensive gradient assembly, making the method attractive for large-scale simulations. We validate the proposed method on three partial differential equations: the viscous Burgers' equation, the two-dimensional Fisher-KPP equation, and an advection-diffusion equation. We perform systematic comparisons against standard operator inference under two perturbation regimes, namely reduced temporal snapshot density and additive Gaussian noise. For clean data, both approaches deliver similar accuracy, but under sparse sampling and noise, the proposed adjoint-based training provides better accuracy and enhanced roll-out stability.
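The forward-solve/adjoint-solve/gradient-assembly loop described above can be sketched in miniature. The example below is a simplified illustration, not the paper's method: it uses a linear reduced model dx/dt = A x integrated by forward Euler with a discrete adjoint sweep, whereas the paper derives the continuous adjoint equations for operator-inference models with quadratic terms. All function names and the problem setup are invented for illustration.

```python
import numpy as np

def forward(A, x0, dt, N):
    # One forward reduced-order solve: forward Euler on dx/dt = A x.
    xs = [x0]
    for _ in range(N):
        xs.append(xs[-1] + dt * (A @ xs[-1]))
    return np.array(xs)

def traj_loss(A, x0, ys, dt):
    # Trajectory-based loss against (projected) snapshot data ys;
    # no time derivatives of the data are ever estimated.
    xs = forward(A, x0, dt, len(ys) - 1)
    return 0.5 * np.sum((xs - ys) ** 2)

def adjoint_grad(A, x0, ys, dt):
    # One forward solve, then one backward (adjoint) sweep; the
    # gradient assembly is a cheap outer-product accumulation per step.
    N = len(ys) - 1
    xs = forward(A, x0, dt, N)
    lam = xs[N] - ys[N]                    # terminal adjoint condition
    grad = np.zeros_like(A)
    for k in range(N - 1, -1, -1):
        grad += dt * np.outer(lam, xs[k])              # dL/dA contribution
        lam = (xs[k] - ys[k]) + lam + dt * (A.T @ lam)  # adjoint step
    return grad
```

A gradient-based optimizer would then call `adjoint_grad` once per iteration, matching the one-forward-plus-one-adjoint cost per update noted in the abstract; the gradient can be checked against finite differences of `traj_loss`.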