Neural operators effectively solve PDE problems from data without knowing the explicit equations: they learn the map from input sequences of observed samples to predicted values. Most existing works build the model in the original geometric space, leading to high computational cost when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in a latent space. In particular, we first propose Physics-Cross-Attention (PhCA) to transform representations from the geometric space to the latent space, then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not limited to locations defined in the training set, and can therefore naturally perform interpolation and extrapolation tasks, which is particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces GPU memory usage by 50%, speeds up training 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and on a benchmark for an inverse problem.
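The encode-process-decode pipeline described above can be sketched with plain cross-attention. This is a minimal NumPy illustration, not the paper's implementation: the feature encodings, latent operator layer, and all dimensions (`N`, `M`, `d`) are placeholder assumptions; the key idea shown is that learnable latent queries compress N geometric tokens into M latent tokens (M << N), the operator acts in that latent space, and decoding attends from arbitrary query positions, so output locations need not match the training grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: each query attends to all keys.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

N, M, d = 500, 64, 32                    # N sample points, M latent tokens (M << N)
x_feats = rng.standard_normal((N, d))    # stand-in for encoded (position, value) features
latent_q = rng.standard_normal((M, d))   # stand-in for learnable latent queries (encoder PhCA)

# 1) Encode: geometric space -> latent space (N tokens compressed to M tokens).
z = cross_attention(latent_q, x_feats, x_feats)         # shape (M, d)

# 2) Learn/apply the operator in the latent space (a toy single layer here).
z = np.tanh(z @ rng.standard_normal((d, d)) / np.sqrt(d))

# 3) Decode: latent space -> arbitrary output positions (inverse PhCA map).
Q = 1000                                  # any number of query locations,
out_q = rng.standard_normal((Q, d))       # not limited to the training grid
u_pred = cross_attention(out_q, z, z)     # shape (Q, d) predicted values

print(z.shape, u_pred.shape)              # (64, 32) (1000, 32)
```

Because steps 1-3 attend over M latent tokens rather than N sample points, attention cost scales with N·M instead of N², which is the source of the memory and speed savings the abstract reports.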