In this work, we propose a set of physics-informed geometric operators (GOs) to enrich the geometric data used for training surrogate/discriminative models, dimension-reduction models, and generative models, which are typically employed for performance prediction, dimension reduction, and data-driven parameterisation, respectively. Because both the input and output streams of these models consist of low-level shape representations, they often fail to capture shape characteristics essential for performance analyses. The proposed GOs therefore exploit the differential and integral properties of shapes, accessed through Fourier descriptors, curvature integrals, geometric moments, and their invariants, to infuse high-level intrinsic geometric information and physics into the feature vector used for training, even when simple model architectures or low-level parametric descriptions are employed. We show that for surrogate modelling, beyond injecting a notion of physics, GOs act as regularisers that reduce over-fitting and enhance generalisation to new, unseen designs. Furthermore, through extensive experimentation, we demonstrate that for dimension reduction and generative models, incorporating the proposed GOs enriches the training data with compact global and local geometric features. This significantly enhances the quality of the resulting latent space, thereby facilitating the generation of valid and diverse designs. Lastly, we show that GOs enable, to a large extent, the learning of parametric sensitivities. Together, these enhancements accelerate the convergence of shape optimisers towards optimal solutions.
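As a concrete illustration of one ingredient mentioned above, the sketch below computes low-order geometric moments and their translation-invariant (central) counterparts for a 2-D point cloud. This is a hypothetical minimal example, not the paper's implementation: the proposed GOs combine richer differential and integral quantities, but moment-based features of this kind are representative of how high-level geometric information can enter a feature vector.

```python
import numpy as np

def geometric_moments(points: np.ndarray, max_order: int = 2) -> dict:
    """Raw and central (translation-invariant) moments of a 2-D point cloud.

    Hypothetical helper for illustration only; the paper's GOs draw on a
    broader set of integral/differential shape properties.
    """
    x, y = points[:, 0], points[:, 1]
    feats = {}
    # Raw moments M_pq = sum(x^p * y^q) up to the requested total order.
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats[f"M{p}{q}"] = float(np.sum(x**p * y**q))
    # Centroid from first-order moments.
    cx = feats["M10"] / feats["M00"]
    cy = feats["M01"] / feats["M00"]
    # Central moments mu_pq are invariant to translating the shape,
    # which is one route to the moment invariants mentioned above.
    xc, yc = x - cx, y - cy
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats[f"mu{p}{q}"] = float(np.sum(xc**p * yc**q))
    return feats
```

For instance, for the unit square's four corners, the second central moment `mu20` equals 1.0 and remains unchanged if the square is translated, reflecting the invariance that makes such features useful as compact shape descriptors.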