Multiscale problems are ubiquitous in physics. Numerical simulations of such problems by solving partial differential equations (PDEs) at high resolution are computationally too expensive for many-query scenarios, e.g., uncertainty quantification, remeshing applications, and topology optimization. This limitation has motivated the use of data-driven surrogate models, in which the microscale computations are $\textit{substituted}$ by a surrogate, usually acting as a black-box mapping between macroscale quantities. These models offer significant speedups but struggle to incorporate microscale physical constraints, such as the balance of linear momentum and constitutive models. In this contribution, we propose the Equilibrium Neural Operator (EquiNO) as a $\textit{complementary}$ physics-informed PDE surrogate for predicting microscale physics and compare it with variational physics-informed neural and operator networks. Our framework, applicable to so-called multiscale FE$^2$ computations, introduces the FE-OL approach by integrating the finite element (FE) method with operator learning (OL). We apply the proposed FE-OL approach to quasi-static problems of solid mechanics. The results demonstrate that FE-OL yields accurate solutions even when only a limited dataset is available during model development. Moreover, EquiNO achieves speedup factors exceeding 8000-fold compared to traditional methods and offers an optimal balance between data-driven and physics-based strategies.