We develop a novel deep learning technique, termed Deep Orthogonal Decomposition (DOD), for dimensionality reduction and reduced order modeling of parameter-dependent partial differential equations. The approach constructs a deep neural network model that approximates the solution manifold through a continuously adaptive local basis. In contrast to global methods, such as Proper Orthogonal Decomposition (POD), this adaptivity allows the DOD to overcome the Kolmogorov barrier, making the approach applicable to a wide spectrum of parametric problems. Furthermore, due to its hybrid linear-nonlinear nature, the DOD can accommodate both intrusive and nonintrusive techniques, providing highly interpretable latent representations and tighter control on error propagation. For this reason, the proposed approach stands out as a valuable alternative to other nonlinear techniques, such as deep autoencoders. The methodology is discussed both theoretically and practically, evaluating its performance on problems featuring nonlinear PDEs, singularities, and parametrized geometries.
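The core idea of a parameter-adaptive local basis can be illustrated with a minimal sketch. Here a tiny network with random (untrained) weights stands in for the learned DOD map: it takes a parameter vector and produces a matrix that is orthonormalized via QR, yielding a parameter-dependent orthonormal basis onto which full-order snapshots are projected. All sizes, weights, and the function name `dod_basis` are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N = full-order dim, n = local basis dim, p = parameter dim
N, n, p = 200, 5, 3

# Random weights standing in for a trained network mapping mu -> N x n matrix
W1 = rng.standard_normal((16, p))
W2 = rng.standard_normal((N * n, 16))

def dod_basis(mu):
    """Map a parameter vector mu to an orthonormal local basis V(mu) of shape (N, n)."""
    h = np.tanh(W1 @ mu)          # hidden features of the parameter
    A = (W2 @ h).reshape(N, n)    # raw, generally non-orthogonal basis
    V, _ = np.linalg.qr(A)        # orthonormalize so that V^T V = I
    return V

mu = rng.standard_normal(p)
V = dod_basis(mu)

# Reduced (latent) coordinates of a full-order snapshot, and its reconstruction:
# because V is orthonormal, the coefficients c are simple inner products,
# which is what makes the latent representation linear and interpretable.
u = rng.standard_normal(N)
c = V.T @ u                       # latent coordinates in the local basis
u_hat = V @ c                     # best approximation of u in span(V(mu))
```

Orthonormality is what distinguishes this from a generic autoencoder decoder: the projection error `u - u_hat` is orthogonal to the local subspace, so the approximation error can be monitored and controlled directly, in the same way as for POD.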