This work presents a novel resolution-invariant model order reduction strategy for multifidelity applications. Our architecture is based on a new neural network layer developed in this work, the graph feedforward network, which extends the concept of feedforward networks to graph-structured data by creating a direct link between the weights of the neural network and the nodes of a mesh, enhancing the interpretability of the network. We exploit the method's ability to train and test on different mesh sizes in an autoencoder-based reduction strategy for parametrised partial differential equations, and we show that this extension comes with provable performance guarantees in the form of error bounds. The capabilities of the proposed methodology are tested on three challenging benchmarks, including advection-dominated phenomena and problems with a high-dimensional parameter space. Compared with state-of-the-art models, the method yields a more lightweight and highly flexible strategy while showing excellent generalisation performance in both single-fidelity and multifidelity scenarios.
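The core idea of tying network weights to mesh nodes can be illustrated with a minimal sketch. The class, names, and interpolation scheme below are illustrative assumptions, not the paper's actual formulation: each mesh node owns a row of the weight matrix, and moving between mesh resolutions is done by interpolating those rows over the node coordinates, so the same trained layer can be evaluated on a finer or coarser mesh.

```python
import numpy as np

class GraphFeedforward:
    """Hypothetical sketch of a node-indexed feedforward layer.

    One learnable weight row per mesh node; the encoding is a discrete
    inner product over the mesh, so it stays comparable across resolutions.
    """

    def __init__(self, node_coords, latent_dim, rng=None):
        rng = np.random.default_rng(rng)
        self.node_coords = node_coords                      # (n_nodes,) 1D mesh
        self.W = 0.1 * rng.normal(size=(len(node_coords), latent_dim))
        self.b = np.zeros(latent_dim)

    def encode(self, u):
        # u: (n_nodes,) field sampled on the mesh -> (latent_dim,) code.
        # Dividing by n_nodes makes this a mesh-averaged inner product,
        # approximately invariant to the mesh resolution.
        return np.tanh(u @ self.W / len(u) + self.b)

    def resample(self, new_coords):
        # Transfer node-wise weights to a finer/coarser mesh by linearly
        # interpolating each weight column over the node coordinates.
        new = GraphFeedforward(new_coords, self.W.shape[1])
        new.W = np.stack(
            [np.interp(new_coords, self.node_coords, col) for col in self.W.T],
            axis=1,
        )
        new.b = self.b.copy()
        return new

# Usage: "train" on a coarse mesh, then evaluate the same layer on a finer one.
coarse = np.linspace(0.0, 1.0, 16)
fine = np.linspace(0.0, 1.0, 64)
layer = GraphFeedforward(coarse, latent_dim=8, rng=0)
layer_fine = layer.resample(fine)
z_coarse = layer.encode(np.sin(2 * np.pi * coarse))
z_fine = layer_fine.encode(np.sin(2 * np.pi * fine))
```

Because both encodings approximate the same mesh-averaged inner product, the codes produced on the two resolutions stay close for smooth fields, which is the kind of cross-resolution consistency the abstract's error bounds formalise.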