This paper introduces the Kernel Neural Operator (KNO), a provably convergent operator-learning architecture that uses compositions of deep kernel-based integral operators for function-space approximation of operators (maps from functions to functions). The KNO decouples the choice of kernel from the numerical integration scheme (quadrature), naturally enabling operator learning with explicitly chosen, trainable kernels on irregular geometries; on irregular domains, this decoupling lets the KNO employ domain-specific quadrature rules. To help ameliorate the curse of dimensionality, we also leverage an efficient dimension-wise factorization algorithm on regular domains. More importantly, the ability to explicitly specify kernels admits highly expressive, non-stationary, anisotropic neural kernels whose parameters are computed by trained neural networks. Numerical results demonstrate that, on existing benchmarks, the training and test accuracy of KNOs is comparable to or higher than that of popular operator-learning techniques, while typically using an order of magnitude fewer trainable parameters, with the more expressive kernels proving important to attaining high accuracy. KNOs thus facilitate low-memory, geometrically flexible, deep operator learning while retaining the implementation simplicity and transparency of traditional kernel methods from both scientific computing and machine learning.
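To make the kernel/quadrature decoupling concrete, the sketch below implements a single kernel integral layer: the output is a quadrature-weighted sum v(y_i) = Σ_j k_θ(y_i, x_j) u(x_j) w_j, where the weights w_j may come from any quadrature rule for the domain and k_θ is a non-stationary, anisotropic Gaussian whose per-dimension length-scales are produced by a small neural network. This is a minimal illustrative sketch under assumed names, shapes, and kernel form, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class KernelIntegralLayer(nn.Module):
    """Illustrative single kernel integral operator layer:
        v(y_i) = sum_j k_theta(y_i, x_j) * u(x_j) * w_j,
    where w_j are quadrature weights chosen independently of the kernel,
    and k_theta is a trainable, non-stationary, anisotropic Gaussian whose
    length-scales are output by a small MLP (a hypothetical parameterization,
    not the paper's exact construction)."""

    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        # MLP maps each output location y to per-dimension log length-scales,
        # making the kernel non-stationary and anisotropic.
        self.lengthscale_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, u, x, w, y):
        # u: (n,) input-function values at quadrature nodes x: (n, dim)
        # w: (n,) quadrature weights; y: (m, dim) output evaluation points
        ell = torch.exp(self.lengthscale_net(y))          # (m, dim) length-scales
        diff = (y[:, None, :] - x[None, :, :]) / ell[:, None, :]
        k = torch.exp(-0.5 * (diff ** 2).sum(-1))         # (m, n) kernel matrix
        return k @ (u * w)                                # quadrature-weighted sum

# Usage: trapezoidal-rule quadrature on [0, 1] (a regular 1-D domain); on an
# irregular domain, only x and w would change, not the kernel.
n = 64
x = torch.linspace(0, 1, n)[:, None]
w = torch.full((n,), 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5
layer = KernelIntegralLayer(dim=1)
v = layer(torch.sin(torch.pi * x[:, 0]), x, w, x)         # (n,) output values
```

Because the quadrature nodes and weights enter only as data, swapping in a rule tailored to an irregular geometry requires no change to the trainable kernel itself.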