We investigate reduced-order models for acoustic and electromagnetic wave problems in parametrically defined domains. The parameter-to-solution maps are approximated with the Galerkin POD-NN method, which combines a reduced basis constructed via proper orthogonal decomposition (POD) with neural networks (NNs). In contrast to the standard reduced basis method, this approach allows for the fast evaluation of reduced-order solutions for any given parametric input. As is customary in the analysis of problems in random or parametrically defined domains, we begin by transporting the formulation to a reference domain, which yields a parameter-dependent variational problem posed on parameter-independent function spaces. In particular, we consider affine-parametric domain transformations characterized by a high-dimensional, possibly countably infinite, parametric input. To keep the number of high-fidelity solves manageable, we propose sampling the parameter space with low-discrepancy sequences. An NN is then trained to learn the coefficients of the reduced representation, fully decoupling the offline and online stages of the reduced basis paradigm. Numerical results for the three-dimensional Helmholtz and Maxwell equations confirm the method's accuracy up to a certain barrier and show significant online speed-ups compared to the traditional Galerkin POD method.
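The offline/online pipeline described above (snapshot collection at low-discrepancy parameter samples, POD basis extraction, surrogate regression of reduced coefficients, cheap online evaluation) can be sketched on a toy one-parameter problem. This is a minimal illustration only: the paper's Helmholtz/Maxwell problems are replaced by the hypothetical family u(x; μ) = sin(μx), a Halton sequence stands in for the low-discrepancy sampler, and a polynomial least-squares fit stands in for the trained NN that maps parameters to reduced coefficients.

```python
import numpy as np

def halton(n, base=2):
    """First n points of the 1D Halton low-discrepancy sequence in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

# Toy high-fidelity model (assumption, not the paper's PDE solver):
# u(x; mu) = sin(mu * x) on a fixed spatial grid.
x = np.linspace(0.0, 1.0, 200)
mus = 1.0 + 4.0 * halton(100)                 # parameters sampled in [1, 5]
S = np.array([np.sin(m * x) for m in mus]).T  # snapshot matrix (space x samples)

# Offline stage: POD basis from the SVD of the snapshot matrix.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 10
V = U[:, :r]        # first r POD modes
C = V.T @ S         # reduced coefficients of each snapshot

# Surrogate regression (stand-in for the NN): fit each reduced
# coefficient as a degree-8 polynomial in mu via least squares.
fits = [np.polyfit(mus, C[k], 8) for k in range(r)]

def rom_solve(mu):
    """Online stage: evaluate surrogate coefficients, expand in the POD basis."""
    c = np.array([np.polyval(p, mu) for p in fits])
    return V @ c

# Relative error of the reduced-order solution at an unseen parameter.
mu_test = 3.3
u_ref = np.sin(mu_test * x)
err = np.linalg.norm(rom_solve(mu_test) - u_ref) / np.linalg.norm(u_ref)
```

Because the snapshot family depends analytically on the parameter, the singular values decay rapidly and a small basis already captures the solution manifold; the online stage then costs only a surrogate evaluation and one small matrix-vector product, independent of the high-fidelity grid size.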