Computationally efficient surrogates for parametrized physical models play a crucial role in science and engineering. Operator learning provides data-driven surrogates that map between function spaces. However, instead of full-field measurements, often the available data are only finite-dimensional parametrizations of model inputs or finite observables of model outputs. Building on Fourier Neural Operators, this paper introduces the Fourier Neural Mappings (FNMs) framework, which accommodates such finite-dimensional vector inputs and outputs. The paper develops universal approximation theorems for the method. Moreover, in many applications the underlying parameter-to-observable (PtO) map is defined implicitly through an infinite-dimensional operator, such as the solution operator of a partial differential equation. A natural question is whether it is more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and subsequently compute the observable from the full-field solution. A theoretical analysis of Bayesian nonparametric regression of linear functionals, which is of independent interest, suggests that the end-to-end approach can in fact have worse sample complexity. Extending beyond the theory, numerical results for the FNM approximation of three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this paper adopts.
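To make the framework concrete, the following is a minimal numpy sketch of an FNM-style map from a finite-dimensional input vector to a finite-dimensional output: a lift from the vector to functions on a grid, one standard Fourier layer (FFT, action on the lowest modes, inverse FFT, plus a pointwise skip connection), and an integration-type functional back to a vector. The encoder/decoder choices, dimensions, and the use of random untrained weights are purely illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_modes, width = 64, 8, 16   # grid size, retained Fourier modes, channels
d_in, d_out = 3, 2                    # finite input / output dimensions

# Encoder (illustrative): lift the input vector to `width` constant functions
# on the grid via a random linear map. Real FNMs learn this lift.
E = rng.standard_normal((width, d_in)) / np.sqrt(d_in)

def encode(v):
    return np.outer(E @ v, np.ones(n_grid))          # shape (width, n_grid)

# One Fourier layer: transform, act linearly on the lowest n_modes per
# channel pair, transform back, add a pointwise linear skip, apply ReLU
# (standing in for the smoother activations used in practice).
R = (rng.standard_normal((width, width, n_modes))
     + 1j * rng.standard_normal((width, width, n_modes))) / width
W = rng.standard_normal((width, width)) / np.sqrt(width)

def fourier_layer(z):
    zh = np.fft.rfft(z, axis=-1)                     # (width, n_grid//2 + 1)
    out = np.zeros_like(zh)
    out[:, :n_modes] = np.einsum('ijk,jk->ik', R, zh[:, :n_modes])
    spectral = np.fft.irfft(out, n=n_grid, axis=-1)
    return np.maximum(spectral + W @ z, 0.0)

# Decoder (illustrative): average over the grid approximates an integral
# functional of the hidden field, then map to the output vector.
D = rng.standard_normal((d_out, width)) / np.sqrt(width)

def fnm(v):
    z = fourier_layer(encode(v))
    return D @ z.mean(axis=-1)

y = fnm(rng.standard_normal(d_in))
print(y.shape)  # (2,)
```

The key structural point is that the Fourier layer operates on functions regardless of where the finite-dimensional data enter, so vector-valued inputs, outputs, or both can be attached via the encoder and decoder ends.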
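The end-to-end versus full-field question can be made tangible on a toy linear problem. In the sketch below, a random matrix stands in for a linear solution operator and a fixed vector for the observable functional; all names and dimensions are illustrative assumptions, not the paper's setting. With plain (unregularized) least squares the two pipelines produce the same estimator, which is exactly why the sample-complexity gap the abstract describes is a statement about the regularized, nonparametric regime analyzed in the paper rather than about simple linear regression.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 10, 50, 40                        # parameter dim, field dim, samples

A = rng.standard_normal((m, d))             # stand-in linear "solution operator"
w = rng.standard_normal(m)                  # observable functional: q(u) = w @ u

theta = rng.standard_normal((n, d))                     # input parameters
u = theta @ A.T + 0.01 * rng.standard_normal((n, m))    # noisy full fields
q = u @ w                                               # scalar observables

# End-to-end: regress the observable directly on the parameters.
coef_e2e, *_ = np.linalg.lstsq(theta, q, rcond=None)

# Two-step: regress the full field on the parameters, then apply the
# observable functional to the fitted field map.
coef_field, *_ = np.linalg.lstsq(theta, u, rcond=None)
coef_two = coef_field @ w

print(np.allclose(coef_e2e, coef_two))  # True: least squares is linear in the target
```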