In recent years, deep learning has gained increasing popularity in the fields of Partial Differential Equations (PDEs) and Reduced Order Modeling (ROM), providing domain practitioners with powerful new data-driven techniques such as Physics-Informed Neural Networks (PINNs), Neural Operators, Deep Operator Networks (DeepONets), and Deep Learning-based ROMs (DL-ROMs). In this context, deep autoencoders based on Convolutional Neural Networks (CNNs) have proven extremely effective, outperforming established techniques, such as the reduced basis method, when dealing with complex nonlinear problems. However, despite the empirical success of CNN-based autoencoders, there are only a few theoretical results supporting these architectures, usually stated in the form of universal approximation theorems. In particular, although the existing literature provides users with guidelines for designing convolutional autoencoders, the subsequent challenge of learning the latent features has been barely investigated. Furthermore, many practical questions remain unanswered, e.g., the number of snapshots needed for convergence or the neural network training strategy. In this work, using recent techniques from sparse high-dimensional function approximation, we fill some of these gaps by providing a new practical existence theorem for CNN-based autoencoders when the parameter-to-solution map is holomorphic. This regularity assumption arises in many relevant classes of parametric PDEs, such as the parametric diffusion equation, for which we discuss an explicit application of our general theory.