Topological deep learning has emerged as a powerful paradigm for modeling higher-order relational structures beyond the pairwise interactions that standard graph neural networks capture. While combinatorial complexes (CCs) offer a unified topological foundation for higher-order graph learning, existing topological deep learning methods rely heavily on local message passing and attention mechanisms. These mechanisms suffer from quadratic complexity and local neighborhood constraints, limiting their scalability and their capacity for rank-aware, long-range dependency modeling. To overcome these challenges, we propose Combinatorial Complex Mamba (CCMamba), the first unified Mamba-based neural framework for learning on combinatorial complexes. CCMamba reformulates higher-order message passing as a selective state-space modeling problem by linearizing multi-rank incidence relations into structured, rank-aware sequences. This architecture enables adaptive, directional, and long-range information propagation in linear time, bypassing the scalability bottlenecks of self-attention. Theoretically, we further establish that the expressive power of CCMamba is upper-bounded by the 1-dimensional combinatorial complex Weisfeiler-Lehman (1-CCWL) test. Extensive experiments across graph, hypergraph, and simplicial benchmarks demonstrate that CCMamba consistently outperforms existing methods while exhibiting superior scalability and remarkable robustness against over-smoothing in deep architectures.
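The two core steps named above, linearizing a combinatorial complex into a rank-aware cell sequence and scanning it with a selective (input-gated) state-space recurrence, can be illustrated with a minimal sketch. All names and the toy gating rule below are illustrative assumptions for exposition; this is not the paper's actual CCMamba architecture.

```python
# Hedged sketch: (1) linearize a toy combinatorial complex into a
# rank-aware sequence of cells, (2) run a toy 1-D selective scan over
# scalar cell features. Illustrative only; not the CCMamba model.
import math

# A toy combinatorial complex: cells (frozensets of vertices) with ranks.
cells = {
    frozenset({0}): 0, frozenset({1}): 0, frozenset({2}): 0,  # rank-0 (nodes)
    frozenset({0, 1}): 1, frozenset({1, 2}): 1,               # rank-1 (edges)
    frozenset({0, 1, 2}): 2,                                  # rank-2 (face)
}

def linearize(cc):
    """Order cells by rank, then by sorted vertex set (rank-aware sequence)."""
    return sorted(cc, key=lambda c: (cc[c], sorted(c)))

def selective_scan(xs, a=0.9):
    """Toy selective SSM: h_t = g_t * a * h_{t-1} + x_t, with an
    input-dependent gate g_t = sigmoid(x_t) (the 'selective' part)."""
    h, out = 0.0, []
    for x in xs:
        g = 1.0 / (1.0 + math.exp(-x))  # gate depends on the input
        h = g * a * h + x
        out.append(h)
    return out

seq = linearize(cells)
# Scalar feature per cell: here, cardinality plus rank (a stand-in for
# learned embeddings).
feats = [len(c) + cells[c] for c in seq]
states = selective_scan(feats)
```

Each scan step is O(1), so the pass over all cells is linear in sequence length, which is the complexity contrast with quadratic self-attention that the abstract draws.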