Partial differential equations (PDEs) are widely used to model complex physical systems, but solving them efficiently remains a significant challenge. Recently, Transformers have emerged as the preferred architecture for solving PDEs due to their ability to capture intricate dependencies. However, they struggle to represent continuous dynamics and long-range interactions. To overcome these limitations, we introduce the Mamba Neural Operator (MNO), a novel framework that enhances neural operator-based techniques for solving PDEs. MNO establishes a formal theoretical connection between structured state-space models (SSMs) and neural operators, offering a unified structure that can adapt to diverse architectures, including Transformer-based models. By leveraging the structured design of SSMs, MNO captures long-range dependencies and continuous dynamics more effectively than traditional Transformers. Through extensive analysis, we show that MNO significantly boosts the expressive power and accuracy of neural operators, making it not just a complement but a superior framework for PDE-related tasks, bridging the gap between efficient representation and accurate solution approximation.