A computed approximation of the solution operator to a system of partial differential equations (PDEs) is needed in various areas of science and engineering. Neural operators have been shown to be quite effective at predicting these solution operators after training on high-fidelity ground truth data (e.g., numerical simulations). However, in order to generalize well to unseen spatial domains, neural operators must be trained on an extensive amount of geometrically varying data samples, which may not be feasible to acquire or simulate in certain contexts (e.g., patient-specific medical data, large-scale computationally intensive simulations). We propose that, in order to learn a PDE solution operator that generalizes across multiple domains without sampling data expressive enough to cover all possible geometries, we can instead train a latent neural operator on just a few ground truth solution fields diffeomorphically mapped from different geometric/spatial domains to a fixed reference configuration. The form of these mapped solutions depends on the choice of mapping to and from the reference domain. We emphasize that preserving properties of the differential operator when constructing these mappings can significantly reduce the data requirement for achieving an accurate model, owing to the regularity of the solution fields on which the latent neural operator is trained. We provide motivating numerical experiments that demonstrate an extreme case of this consideration by exploiting the conformal invariance of the Laplacian.
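To make the invariance invoked above concrete, we recall the standard two-dimensional identity (the notation $\varphi$, $u$, $\Omega_0$, $\Omega$ is introduced here for illustration and does not refer to any specific experiment): if $\varphi : \Omega_0 \to \Omega$ is a conformal map between planar domains and $u$ is a scalar field on $\Omega$, then
\[
\Delta\left(u \circ \varphi\right) \;=\; \lvert \varphi' \rvert^{2}\,\left(\Delta u\right) \circ \varphi .
\]
In particular, $\Delta u = 0$ on $\Omega$ implies $\Delta\left(u \circ \varphi\right) = 0$ on $\Omega_0$, so harmonic solution fields pulled back to the reference domain remain harmonic; the latent neural operator then only ever sees fields with this shared regularity, regardless of the original geometry.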