Representation learning for high-dimensional, complex physical systems aims to identify a low-dimensional intrinsic latent space, which is crucial for reduced-order modeling and modal analysis. To overcome the well-known Kolmogorov barrier, deep autoencoders (AEs) have been introduced in recent years, but they often suffer from poor convergence behavior as the rank of the latent space increases. To address this issue, we propose the learnable weighted hybrid autoencoder, a hybrid approach that combines the strengths of singular value decomposition (SVD) with deep autoencoders through a learnable weighted framework. We find that the introduction of learnable weighting parameters is essential: without them, the resulting model either collapses into standard proper orthogonal decomposition (POD) or fails to exhibit the desired convergence behavior. Additionally, we empirically find that our trained model has a sharpness thousands of times smaller than that of competing models. Our experiments on classical chaotic PDE systems, including the 1D Kuramoto-Sivashinsky equation and forced isotropic turbulence datasets, demonstrate that our approach significantly improves generalization performance over several competing methods, paving the way for robust representation learning of high-dimensional, complex physical systems.
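The hybrid idea above can be illustrated with a minimal sketch: an SVD/POD branch and a neural-network branch produce two reconstructions, blended by learnable per-component weights. This is an assumption-laden toy, not the paper's implementation: the network branch is a fixed random stand-in for a trained deep AE, the weights `alpha` are shown at fixed values rather than optimized, and all names (`pod_reconstruct`, `hybrid_reconstruct`, `alpha`) are hypothetical. Note that setting `alpha` to all ones recovers the pure POD branch, mirroring the collapse behavior described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))  # synthetic snapshot matrix: 200 snapshots x 64 state dims

r = 8  # latent rank

# SVD/POD branch: project onto the leading r right singular vectors
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V_r = Vt[:r].T  # (64, r) POD basis

def pod_reconstruct(x):
    return (x @ V_r) @ V_r.T  # project to rank-r POD coordinates, then lift back

# Deep-AE branch stand-in: a fixed random one-layer "network"
# (in the actual method this would be a trained nonlinear autoencoder)
W_enc = 0.1 * rng.standard_normal((64, r))
W_dec = 0.1 * rng.standard_normal((r, 64))

def ae_reconstruct(x):
    return np.tanh(x @ W_enc) @ W_dec

# Per-component blending weights (hypothetical form; in training these
# would be learnable parameters optimized jointly with the AE)
alpha = 0.5 * np.ones(64)

def hybrid_reconstruct(x):
    return alpha * pod_reconstruct(x) + (1.0 - alpha) * ae_reconstruct(x)

x_hat = hybrid_reconstruct(X)
print(x_hat.shape)  # reconstruction has the same shape as the input snapshots
```

With `alpha = 1` everywhere the hybrid output equals the POD reconstruction exactly, which is one way to see why freezing the weights would reduce the model to a standard linear method.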