We propose a new neural-network-based large eddy simulation (LES) framework for the incompressible Navier-Stokes equations, built on the paradigm "discretize first, filter and close next". This yields full model-data consistency and allows neural closure models to be employed in the same environment in which they were trained. Since the LES discretization error is included in the learning process, the closure models can learn to account for the discretization. Furthermore, we employ a divergence-consistent discrete filter defined through face-averaging and provide novel theoretical and numerical filter analysis. This filter preserves the discrete divergence-free constraint by construction, unlike general discrete filters such as volume-averaging filters. We show that a divergence-consistent LES formulation coupled with a convolutional neural closure model produces stable and accurate results under both a-priori and a-posteriori training, whereas a general (divergence-inconsistent) LES model requires a-posteriori training or other stability-enforcing measures.
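The divergence-consistency property of the face-averaging filter can be illustrated with a minimal sketch on a 2D periodic staggered grid (a hypothetical NumPy setup for illustration, not the paper's code): each coarse face velocity is the mean of the fine face velocities tiling that coarse face, so coarse fluxes are sums of fine fluxes and the discrete divergence-free constraint survives filtering exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16           # fine grid: N x N periodic cells
h = 1.0 / N      # fine spacing
H = 2 * h        # coarse spacing (coarsening factor 2)

# Build a discretely divergence-free fine field from a random streamfunction.
# psi lives on cell corners; u on vertical faces, v on horizontal faces.
psi = rng.standard_normal((N, N))
u = (psi - np.roll(psi, 1, axis=1)) / h   # u[i, j] at face (x_{i+1/2}, y_j)
v = -(psi - np.roll(psi, 1, axis=0)) / h  # v[i, j] at face (x_i, y_{j+1/2})

def divergence(u, v, h):
    """Discrete per-cell divergence on the periodic staggered grid."""
    return (u - np.roll(u, 1, axis=0) + v - np.roll(v, 1, axis=1)) / h

# Face-averaging filter: coarse cell (I, J) covers fine cells 2I..2I+1,
# 2J..2J+1; each coarse face is the mean of the two fine faces tiling it.
U = 0.5 * (u[1::2, 0::2] + u[1::2, 1::2])
V = 0.5 * (v[0::2, 1::2] + v[1::2, 1::2])

print(np.abs(divergence(u, v, h)).max())  # ~0 (machine precision)
print(np.abs(divergence(U, V, H)).max())  # ~0: divergence-free is preserved
```

By contrast, a volume-averaging filter mixes face values from different x- or y-locations, so the filtered field is generally not divergence-free on the coarse grid; this is the divergence-inconsistency the abstract refers to.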