Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data. To address these issues, we propose Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-B\'enard convection, we find that CoDA-NO outperforms existing methods by over 36%.
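To make the codomain tokenization concrete, the following is a minimal PyTorch sketch of attention over channel-wise function tokens: each physical variable of the input field is treated as one token that is itself a function over the spatial domain, and attention scores are estimated from inner products of query/key functions over the grid. The class name CodomainAttention and the pointwise 1x1 convolutions used for the Q/K/V maps are illustrative stand-ins, not the paper's actual implementation, which uses function-space (integral/spectral) operators.

```python
import torch
import torch.nn as nn

class CodomainAttention(nn.Module):
    """Sketch of self-attention over codomain (channel) tokens.

    Each channel of the input field is one token, i.e., a function on the
    spatial domain. Attention scores come from a quadrature estimate of the
    L2 inner product between query and key functions, so the mixing rule is
    defined on functions rather than on fixed-size feature vectors. The
    pointwise convolutions below are simple stand-ins for CoDA-NO's
    function-space operators.
    """

    def __init__(self, token_dim: int):
        super().__init__()
        # Lift each scalar-valued token function to token_dim channels.
        self.q = nn.Conv2d(1, token_dim, kernel_size=1)
        self.k = nn.Conv2d(1, token_dim, kernel_size=1)
        self.v = nn.Conv2d(1, token_dim, kernel_size=1)
        self.proj = nn.Conv2d(token_dim, 1, kernel_size=1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_vars, H, W) -- one token per physical variable.
        b, t, h, w = u.shape
        tokens = u.reshape(b * t, 1, h, w)
        q = self.q(tokens).reshape(b, t, -1, h, w)
        k = self.k(tokens).reshape(b, t, -1, h, w)
        v = self.v(tokens).reshape(b, t, -1, h, w)
        # Score[i, j] ~ <q_i, k_j>: mean over grid points approximates the
        # integral of q_i * k_j over the spatial domain.
        scores = torch.einsum("bichw,bjchw->bij", q, k) / (h * w)
        attn = scores.softmax(dim=-1)
        # Mix the value functions of all tokens for each output token.
        out = torch.einsum("bij,bjchw->bichw", attn, v)
        out = self.proj(out.reshape(b * t, -1, h, w)).reshape(b, t, h, w)
        return u + out  # residual connection

# Example: 3 coupled variables (e.g., velocity components and pressure)
# sampled on a 64x64 grid.
field = torch.randn(2, 3, 64, 64)
layer = CodomainAttention(token_dim=16)
print(layer(field).shape)  # torch.Size([2, 3, 64, 64])
```

Because tokens index physical variables rather than grid points, the same layer can in principle be reused across PDE systems with different numbers of variables, which is what enables pretraining a single model on multiple systems.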