Modern optimizers like Adam and Muon are central to training large language models, but their reliance on first- and second-moment estimates introduces significant memory overhead, which constrains scalability and computational efficiency. In this work, we reframe the exponential moving average (EMA) underlying these moment estimates as the training of a linear regressor via online gradient flow. Building on this equivalence, we introduce LoRA-Pre, a novel low-rank optimizer designed for efficient pre-training. Specifically, LoRA-Pre reduces the optimizer's memory footprint by decomposing the full momentum matrix into a compact low-rank subspace within the online linear learner, thereby maintaining optimization performance while improving memory efficiency. We empirically validate LoRA-Pre's efficacy by pre-training models from the Llama architecture family, scaling from 60M to 1B parameters. LoRA-Pre achieves the highest performance across all model sizes. Notably, LoRA-Pre demonstrates remarkable rank efficiency, achieving comparable or superior results using only 1/8 the rank of baseline methods. Beyond pre-training, we evaluate LoRA-Pre's effectiveness in fine-tuning scenarios. At the same rank, LoRA-Pre consistently outperforms all efficient fine-tuning baselines. In particular, compared to standard LoRA, LoRA-Pre achieves substantial improvements of 3.14 points on Llama-3.1-8B and 6.17 points on Llama-2-7B, validating our approach's effectiveness across both pre-training and fine-tuning paradigms. Our code is publicly available at https://github.com/mrflogs/LoRA-Pre.
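The stated equivalence between EMA-based momentum and an online linear learner can be checked numerically: one gradient step on the instantaneous squared loss $\tfrac{1}{2}\|m - g_t\|^2$ with learning rate $1-\beta$ yields $m \leftarrow m - (1-\beta)(m - g_t) = \beta m + (1-\beta) g_t$, which is exactly the EMA recurrence. The sketch below (not from the paper's released code; function names are ours for illustration) verifies that the two updates coincide.

```python
import numpy as np

def ema_update(m, g, beta):
    """Standard EMA momentum update: m <- beta*m + (1-beta)*g."""
    return beta * m + (1.0 - beta) * g

def online_sgd_update(m, g, beta):
    """One gradient step on 0.5*||m - g||^2 with learning rate (1 - beta)."""
    grad = m - g  # gradient of 0.5*||m - g||^2 with respect to m
    return m - (1.0 - beta) * grad

rng = np.random.default_rng(0)
beta = 0.9
m_ema = np.zeros(4)
m_sgd = np.zeros(4)
for _ in range(100):
    g = rng.standard_normal(4)  # a stream of stochastic "gradients"
    m_ema = ema_update(m_ema, g, beta)
    m_sgd = online_sgd_update(m_sgd, g, beta)

# The two trajectories are identical at every step.
assert np.allclose(m_ema, m_sgd)
```

Under this view, the momentum buffer is the weight vector of a tiny regressor trained online on the gradient stream, which is what motivates compressing it into a low-rank subspace.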