This paper presents a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) on advection-dominated flows modeled by unstructured meshes. The autoencoder is coupled with the time integration scheme from a traditional deep least-squares Petrov-Galerkin projection and provides the first deployment of a graph autoencoder into a PMOR framework. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' equation problem with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it to a two-dimensional Euler equations model that uses an unstructured mesh. The proposed framework provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections.
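The two-part construction described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: it (1) pools a toy graph to a reduced graph via a node-to-cluster assignment, emulating CNN-style compression, and (2) applies a simple message passing step (neighbor averaging followed by a linear map) as a stand-in for the learned filter at each level of the hierarchy. The clustering rule, the pooling, and all function names are illustrative assumptions.

```python
import numpy as np

def coarsen(adj, clusters):
    """Pool an adjacency matrix given a node -> cluster assignment.

    Returns the assignment matrix P and the coarse adjacency.
    (Illustrative; the paper's hierarchy construction may differ.)
    """
    n_fine = adj.shape[0]
    n_coarse = clusters.max() + 1
    P = np.zeros((n_fine, n_coarse))
    P[np.arange(n_fine), clusters] = 1.0
    adj_c = P.T @ adj @ P                 # aggregate fine-graph edges
    return P, (adj_c > 0).astype(float)   # binarize coarse connectivity

def message_pass(adj, x, W):
    """One message passing step: average neighbor features, apply a linear map."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-12  # avoid division by zero
    return np.tanh((adj @ x) / deg @ W)

# Toy "mesh": a path graph of 4 nodes with scalar nodal features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([[1.0], [2.0], [3.0], [4.0]])

clusters = np.array([0, 0, 1, 1])   # merge adjacent node pairs
rng = np.random.default_rng(0)
W = rng.standard_normal((1, 1))     # stand-in for a trained weight

h = message_pass(adj, x, W)                 # "filter" on the fine graph
P, adj_c = coarsen(adj, clusters)           # one level of the hierarchy
h_c = P.T @ h / P.sum(axis=0)[:, None]      # mean-pool features to coarse nodes
print(h_c.shape)                            # (2, 1): features on the reduced graph
```

Stacking several such coarsen-and-filter levels, with trained weights at each, yields an encoder to a low-dimensional latent space; a mirrored decoder would reverse the hierarchy. In an actual GD-LSPG deployment these operations would be trained end-to-end, which this sketch does not attempt.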