We present a novel property-preserving, kernel-based operator learning method for flows governed by the incompressible Navier-Stokes equations. Traditional numerical solvers incur significant computational cost to respect incompressibility. Operator learning offers efficient surrogate models, but current neural operators fail to exactly enforce physical properties such as incompressibility and periodicity, or to faithfully capture turbulence. Our method maps input functions to the expansion coefficients of output functions in a property-preserving kernel basis, ensuring that predicted velocity fields analytically and simultaneously preserve these physical properties. We evaluate the method on challenging 2D and 3D, laminar and turbulent, incompressible flow problems. It achieves relative $\ell_2$ generalization errors up to six orders of magnitude lower, and trains up to five orders of magnitude faster, than neural operators. Moreover, while our method enforces incompressibility analytically, neural operators exhibit large deviations from it. These results show that our method provides an accurate and efficient surrogate for incompressible flows.