Topological deep learning has emerged for modeling higher-order relational structures beyond the pairwise interactions that standard graph neural networks fail to capture. Although combinatorial complexes offer a unified topological framework, most existing topological deep learning methods rely on local message passing via attention mechanisms, which incurs quadratic complexity and remains low-dimensional, limiting scalability and rank-aware information aggregation in higher-order complexes. We propose Combinatorial Complex Mamba (CCMamba), the first unified Mamba-based neural framework for learning on combinatorial complexes. CCMamba reformulates message passing as a selective state-space modeling problem by organizing multi-rank incidence relations into structured sequences processed by rank-aware state-space models. This enables adaptive, directional, and long-range information propagation in linear time, without self-attention. We further provide a theoretical analysis showing that the expressive power of CCMamba message passing is upper-bounded by the 1-Weisfeiler-Lehman test. Experiments on graph, hypergraph, and simplicial benchmarks demonstrate that CCMamba consistently outperforms existing methods while exhibiting improved scalability and robustness to depth.
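The abstract does not give the exact recurrence, but the core idea of replacing attention with a linear-time selective scan over a rank-ordered cell sequence can be illustrated with a minimal Mamba-style sketch. All names, shapes, and parameterizations below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def selective_ssm_scan(X, A_log, W_B, W_C, W_dt):
    """Minimal selective state-space scan (Mamba-style recurrence).

    X : (L, D) array of cell features, concatenated in rank order
        (e.g. all rank-0 cells, then rank-1, then rank-2, ...),
        so the scan propagates information across ranks in one pass.
    The selective parameters B_t, C_t, and the step size dt_t are
    computed from each token, then the linear recurrence
        h_t = exp(dt_t * A) * h_{t-1} + (dt_t * B_t) * x_t
        y_t = <h_t, C_t>
    is unrolled in O(L) time -- no pairwise attention matrix.
    """
    L, D = X.shape
    N = A_log.shape[0]            # hidden state dimension (assumed)
    A = -np.exp(A_log)            # negative real poles keep the scan stable
    h = np.zeros((D, N))          # one state vector per feature channel
    Y = np.zeros((L, D))
    for t in range(L):
        x = X[t]                                  # (D,)
        dt = np.log1p(np.exp(x @ W_dt))           # softplus step size, (D,)
        B = x @ W_B                               # input projection, (N,)
        C = x @ W_C                               # output projection, (N,)
        # discretized state update followed by readout
        h = np.exp(dt[:, None] * A[None, :]) * h + (dt[:, None] * B[None, :]) * x[:, None]
        Y[t] = h @ C
    return Y
```

Because the recurrence is input-dependent through `dt`, `B`, and `C`, the scan can selectively retain or forget state as it moves between cells of different ranks, which is the property the abstract attributes to rank-aware selective state-space modeling.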