Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, a natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations, most importantly by taking into account different types of simulation datasets? This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate in Eulerian simulations. We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, a property emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.
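To make the encode, latent-propagate, query-decode pattern described above concrete, the sketch below outlines one plausible realization in PyTorch: a cross-attention encoder compresses a variable-size set of mesh cells or particles into a fixed number of latent tokens, a transformer advances the dynamics entirely in that latent space, and a cross-attention decoder evaluates the field at arbitrary space-time query coordinates. This is a minimal illustration under our own assumptions, not the authors' implementation; all module names, dimensions, and the omission of the inverse encoding/decoding objectives are simplifications.

```python
# Minimal sketch of the encode -> latent-propagate -> query-decode pattern.
# Illustrative only: module names and sizes are assumptions, not the UPT codebase.
import torch
import torch.nn as nn


class CrossAttentionPool(nn.Module):
    """Compress a variable-size point set into a fixed set of latent tokens."""

    def __init__(self, dim: int, num_latents: int, num_heads: int = 4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))  # learned queries
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) -- mesh cells or particles, any count
        q = self.latents.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.attn(q, x, x)  # (batch, num_latents, dim)
        return out


class UPTSketch(nn.Module):
    def __init__(self, in_dim: int = 6, q_dim: int = 4, dim: int = 64,
                 num_latents: int = 32, out_dim: int = 3):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.encoder = CrossAttentionPool(dim, num_latents)
        # Latent-space dynamics: a plain transformer stepping the compressed state,
        # independent of the original grid or particle discretization.
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.approximator = nn.TransformerEncoder(layer, num_layers=2)
        self.query_embed = nn.Linear(q_dim, dim)  # embeds (x, y, z, t) queries
        self.decoder = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.head = nn.Linear(dim, out_dim)

    def forward(self, points: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # points:  (batch, n, in_dim)  input field samples (positions + features)
        # queries: (batch, m, q_dim)   arbitrary space-time query coordinates
        z = self.encoder(self.embed(points))  # fixed-size latent state
        z = self.approximator(z)              # advance dynamics in latent space
        q = self.query_embed(queries)
        out, _ = self.decoder(q, z, z)        # evaluate field at the query points
        return self.head(out)


model = UPTSketch()
points = torch.randn(2, 500, 6)   # e.g. 500 particles: position + velocity
queries = torch.randn(2, 100, 4)  # (x, y, z, t) query coordinates
pred = model(points, queries)     # (2, 100, 3) field values at the queries
```

Note how the latent size is fixed regardless of the number of input points, which is what decouples the cost of latent rollouts from the discretization and lets the same model serve both mesh-based and particle-based inputs.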