Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the scarcity of high-resolution training data. To address these issues, we propose the Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain, or channel space, enabling self-supervised learning or pretraining across multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, CoDA-NO outperforms existing methods on the few-shot learning task by over $36\%$. The code is available at https://github.com/ashiq24/CoDA-NO.
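To make the codomain tokenization concrete, the following is a minimal sketch (not the paper's implementation): each channel of a discretized multi-variable function is treated as one token, and standard self-attention mixes information across channels. The random projection matrices stand in for the learned function-space operators of CoDA-NO and are purely illustrative.

```python
import numpy as np

def codomain_attention(u, d_k=16, seed=0):
    """Self-attention over codomain (channel) tokens.

    u has shape (c, n): c physical variables, each discretized on n points.
    Each channel is one token; attention couples the physical variables.
    The projections Wq, Wk, Wv are random placeholders, not learned layers.
    """
    rng = np.random.default_rng(seed)
    c, n = u.shape
    # Hypothetical linear projections to queries, keys, and values.
    Wq, Wk, Wv = (rng.standard_normal((n, d_k)) / np.sqrt(n) for _ in range(3))
    Q, K, V = u @ Wq, u @ Wk, u @ Wv            # each of shape (c, d_k)
    scores = Q @ K.T / np.sqrt(d_k)             # (c, c): channel-channel coupling
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over channel tokens
    return attn @ V                             # mixed per-channel representations

# Example: a 3-channel field (e.g. velocities u, v and pressure p) on 64 points.
x = np.linspace(0.0, 2.0 * np.pi, 64)
field = np.stack([np.sin(x), np.cos(x), np.linspace(0.0, 1.0, 64)])
out = codomain_attention(field)
print(out.shape)  # one d_k-dimensional mixed token per physical variable
```

Because tokens index physical variables rather than spatial patches, the same attention module can, in principle, accept systems with different numbers of variables, which is what enables pretraining across multiple PDE systems.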