Large language models (LLMs) have made significant advances in complex reasoning tasks, yet they remain bottlenecked by two core challenges: architectural inefficiency due to reliance on Transformers, and a lack of structured fine-tuning for high-difficulty domains. We introduce \ourmodel, an attention-free language model that addresses both issues through architectural and data-centric innovations. Built on the state space duality (SSD) layers of Mamba-2, our model eliminates the need for self-attention and key-value caching, enabling constant-memory, constant-time-per-token inference. To train it for complex reasoning, we propose a two-phase curriculum fine-tuning strategy based on the \textsc{PromptCoT} synthesis paradigm, which generates pedagogically structured problems via abstract concept selection and rationale-guided generation. On benchmark evaluations, \ourmodel-7B outperforms strong Transformer and hybrid models of comparable scale, and even surpasses the much larger Gemma3-27B by 2.6\% on AIME 24, 0.6\% on AIME 25, and 3.0\% on LiveCodeBench. These results highlight the potential of state space models as efficient and scalable alternatives to attention-based architectures for high-capacity reasoning.
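To make the efficiency claim concrete, the following is a minimal sketch of the per-head recurrence underlying an SSD layer, written in simplified notation following the Mamba-2 formulation (discretization, multi-head structure, and gating are omitted):
\[
h_t = a_t\, h_{t-1} + B_t\, x_t^{\top}, \qquad y_t = h_t^{\top} C_t,
\]
where $x_t, y_t \in \mathbb{R}^{P}$ are the per-head input and output, $B_t, C_t \in \mathbb{R}^{N}$ are input-dependent projections, $a_t \in (0,1)$ is a scalar decay, and the recurrent state $h_t \in \mathbb{R}^{N \times P}$ has a fixed size independent of sequence length. Decoding therefore requires only $O(NP)$ memory and $O(NP)$ compute per token, in contrast to self-attention, whose key-value cache and per-token cost grow linearly with context length.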