We investigate geometric regularization strategies for learned latent representations in encoder--decoder reduced-order models. In a fixed experimental setting for the advection--diffusion--reaction (ADR) equation, we model latent dynamics using a neural ODE and evaluate four regularization approaches applied during autoencoder pre-training: (a) near-isometry regularization of the decoder Jacobian, (b) a stochastic decoder gain penalty based on random directional gains, (c) a second-order directional curvature penalty, and (d) Stiefel projection of the first decoder layer. Across multiple seeds, we find that (a)--(c) often produce latent representations that make subsequent latent-dynamics training with a frozen autoencoder more difficult, especially for long-horizon rollouts, even when they improve local decoder smoothness or related sensitivity proxies. In contrast, (d) consistently improves conditioning-related diagnostics of the learned latent dynamics and tends to yield better rollout performance. We discuss the hypothesis that, in this setting, the downstream impact of latent-geometry mismatch outweighs the benefits of improved decoder smoothness.
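To make the four regularizers concrete, the following is a minimal NumPy sketch of the loss shapes behind (a), (b), and (d), applied to a linear map `W` standing in for a decoder Jacobian. All function names are illustrative, and the exact loss forms used in the experiments may differ; this is a sketch of the general technique, not the paper's implementation.

```python
import numpy as np

def near_isometry_penalty(J):
    """(a) Near-isometry regularization: push the decoder Jacobian toward
    an isometry by penalizing || J^T J - I ||_F^2 (illustrative form)."""
    d = J.shape[1]
    G = J.T @ J
    return np.sum((G - np.eye(d)) ** 2)

def stochastic_gain_penalty(J, n_dirs=32, rng=None):
    """(b) Stochastic decoder gain penalty: sample random unit directions v
    in latent space and penalize deviation of the gain ||J v|| from 1."""
    rng = np.random.default_rng(rng)
    d = J.shape[1]
    V = rng.standard_normal((d, n_dirs))
    V /= np.linalg.norm(V, axis=0, keepdims=True)  # unit directions
    gains = np.linalg.norm(J @ V, axis=0)
    return np.mean((gains - 1.0) ** 2)

def stiefel_project(W):
    """(d) Stiefel projection: map a weight matrix to the nearest matrix
    with orthonormal columns via the polar factor of its SVD."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))   # tall weight matrix, as in a first decoder layer
W_st = stiefel_project(W)
# After projection, W_st.T @ W_st is the identity, so both penalties
# above evaluate to (numerically) zero on W_st.
```

Note that a Stiefel-projected linear layer is itself an exact isometry, which is one way to see how (d) relates to, yet differs from, the soft penalties (a)–(b): it enforces the constraint on one layer by construction rather than penalizing the end-to-end Jacobian.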