Neural Operators (NOs) have emerged as powerful tools for learning mappings between function spaces. Among them, the kernel integral operator has been widely used in architectures with universal approximation guarantees. Following the original formulation, most advances focus on designing better parameterizations of the kernel over the original physical domain (with $d$ spatial dimensions, $d\in\{1,2,3,\ldots\}$). In contrast, embedding evolution remains largely unexplored, which often drives models toward brute-force embedding lengthening to improve approximation, at the cost of substantially increased computation. In this paper, we introduce an auxiliary dimension that explicitly models embedding evolution in operator form, thereby redefining the NO framework in $d+1$ dimensions (the original $d$ dimensions plus one auxiliary dimension). Under this formulation, we develop a Schrödingerised Kernel Neural Operator (SKNO), which leverages Fourier-based operators to model the $(d+1)$-dimensional evolution. Across more than ten increasingly challenging benchmarks, ranging from the 1D heat equation to the highly nonlinear 3D Rayleigh-Taylor instability, SKNO consistently outperforms other baselines. We further validate its resolution invariance under mixed-resolution training and super-resolution inference, and evaluate its zero-shot generalization to unseen temporal regimes. In addition, we present a broader set of design choices for the lifting and recovery operators, demonstrating their impact on SKNO's predictive performance.