While large language models (LLMs) represent a major advance in artificial intelligence, the hardware and computational costs of training them are a significant burden. Among state-of-the-art optimizers, AdamW relies on diagonal curvature estimates and ignores structural properties of the parameters, while Muon applies global spectral normalization at the cost of discarding curvature information. In this study, we revisit manifold optimization methods for training LLMs, which may address the limitations of both optimizers but have long been overlooked due to their poor performance in large-scale model optimization. By projecting the momentum onto the tangent space of the model parameters and constraining it to a rotational Oblique manifold, we propose **Mano**, a novel, powerful, and efficient optimizer that is the first to close the performance gap between manifold optimization and modern optimizers. Extensive experiments on LLaMA and Qwen3 models demonstrate that Mano consistently and significantly outperforms AdamW and Muon, with lower memory consumption than the former and lower computational complexity than the latter, expanding the Pareto frontier of space and time efficiency.
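To make the geometric mechanism concrete, the PyTorch sketch below illustrates tangent-space momentum on the Oblique manifold, taking the manifold to be matrices with unit-norm rows. The helper names (`oblique_project`, `oblique_retract`, `manifold_momentum_step`), the row-wise convention, and the heavy-ball update rule are all illustrative assumptions; the abstract does not specify Mano's exact update or its rotational constraint.

```python
import torch

def oblique_project(W: torch.Tensor, M: torch.Tensor) -> torch.Tensor:
    """Project M onto the tangent space of the Oblique manifold at W.

    Taking the Oblique manifold as the set of matrices with unit-norm rows,
    the tangent space at W is obtained by removing, from each row of M, its
    component along the corresponding row of W.
    """
    coeff = (W * M).sum(dim=1, keepdim=True)  # row-wise inner products <w_i, m_i>
    return M - coeff * W

def oblique_retract(W: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Retract back onto the manifold by renormalizing each row."""
    return W / (W.norm(dim=1, keepdim=True) + eps)

def manifold_momentum_step(W, M, grad, lr=1e-3, beta=0.9):
    """One illustrative update: accumulate momentum, project it onto the
    tangent space at the current point, take a step, and retract.

    NOTE: a generic tangent-space-momentum sketch, not the authors' exact
    Mano algorithm.
    """
    M = beta * M + grad              # heavy-ball momentum accumulation
    M = oblique_project(W, M)        # keep the momentum tangent to the manifold
    W = oblique_retract(W - lr * M)  # Euclidean step followed by retraction
    return W, M

# Toy usage: a single step on a random weight matrix with unit-norm rows.
W = oblique_retract(torch.randn(4, 8))
M = torch.zeros_like(W)
W, M = manifold_momentum_step(W, M, torch.randn_like(W))
```

The projection keeps each row of the momentum orthogonal to the corresponding (unit-norm) row of the parameters, so the update moves along the manifold rather than off it; the retraction then corrects the residual drift introduced by the Euclidean step.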