Neural operator architectures employ neural networks to approximate operators mapping between Banach spaces of functions; they may be used to accelerate model evaluations via emulation, or to discover models from data. Consequently, the methodology has received increasing attention in recent years, giving rise to the rapidly growing field of operator learning. The first contribution of this paper is to prove that, for general classes of operators characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a ``curse of parametric complexity'', an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems. The result applies to a wide variety of existing neural operators, including PCA-Net, DeepONet, and the FNO. The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation; this is achieved by leveraging additional structure in the underlying solution operator, going beyond regularity. To this end, a novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system. Error and complexity estimates are derived for HJ-Net, showing that this architecture can provably beat the curse of parametric complexity associated with the infinite-dimensional input and output function spaces.
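For concreteness, the characteristic structure alluded to above is the classical method of characteristics; the following standard system is recalled as background, and its specific form is not quoted from the paper. For a Hamilton-Jacobi equation $u_t + H(q, \nabla_q u) = 0$ with initial datum $u(0, \cdot) = u_0$, the solution propagates along trajectories of Hamilton's ODEs,
\[
\dot q = \nabla_p H(q, p), \qquad \dot p = -\nabla_q H(q, p), \qquad \dot z = p \cdot \nabla_p H(q, p) - H(q, p),
\]
with initial conditions $q(0) = x_0$, $p(0) = \nabla u_0(x_0)$, $z(0) = u_0(x_0)$, so that $u(t, q(t)) = z(t)$ along each characteristic (prior to the crossing of characteristics). In this light, HJ-Net can be read as learning the flow map $(q_0, p_0, z_0) \mapsto (q(t), p(t), z(t))$ and recovering $u(t, \cdot)$ from the resulting scattered point values.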