Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions. They are gaining increased attention in computational science and engineering, owing to their potential both to accelerate traditional numerical methods and to enable data-driven discovery. As the field is in its infancy, basic questions about minimal requirements for universal approximation remain open. It is clear that any general approximation of operators between spaces of functions must be both nonlocal and nonlinear. In this paper we describe how these two attributes may be combined in a simple way to deduce universal approximation. In so doing we unify the analysis of a wide range of neural operator architectures and open up consideration of new ones. A popular variant of neural operators is the Fourier neural operator (FNO). Previous analyses proving universal approximation theorems for FNOs resort to an unbounded number of Fourier modes, relying on intuition from the traditional analysis of spectral methods. The present work challenges this point of view: (i) we reduce the FNO to its core essence, resulting in a minimal architecture termed the ``averaging neural operator'' (ANO); and (ii) we show that even this minimal ANO architecture enjoys universal approximation. This result holds with a spatial average as the only nonlocal ingredient (corresponding, in the special case of the FNO, to retaining only a \emph{single} Fourier mode). The analysis paves the way for a more systematic exploration of nonlocality, both through the development of new operator learning architectures and through the analysis of existing and new ones. Numerical results are presented which give insight into complexity issues related to the roles of channel width (embedding dimension) and number of Fourier modes.
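To make the architectural idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one ANO-style layer acting on a function sampled on a grid: a pointwise linear map plus a spatial average, composed with a pointwise nonlinearity. The names \texttt{ano\_layer}, \texttt{W}, \texttt{A}, and \texttt{b} are hypothetical; on a uniform grid the spatial average coincides with the zeroth Fourier coefficient, which is the sense in which the ANO retains a single Fourier mode.

```python
import numpy as np

rng = np.random.default_rng(0)

def ano_layer(u, W, A, b):
    """One illustrative ANO-style layer (names are hypothetical).

    u : (n_points, d_in) samples of the input function on a grid.
    The spatial mean is the single nonlocal ingredient; everything
    else acts pointwise, followed by a pointwise nonlinearity.
    """
    local_part = u @ W                                  # pointwise linear map
    nonlocal_part = u.mean(axis=0, keepdims=True) @ A   # spatial average, broadcast to all points
    return np.maximum(local_part + nonlocal_part + b, 0.0)  # ReLU

# Toy usage: a 3-channel input function sampled at 64 grid points,
# mapped to a width-8 hidden representation (channel width / embedding dimension).
u = rng.standard_normal((64, 3))
W = rng.standard_normal((3, 8))
A = rng.standard_normal((3, 8))
b = rng.standard_normal(8)
v = ano_layer(u, W, A, b)
print(v.shape)  # (64, 8)
```

Stacking such layers between a pointwise lifting and a pointwise projection yields the kind of minimal nonlocal-plus-nonlinear composition discussed above.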