Memory-efficient backpropagation (MeBP) has enabled first-order fine-tuning of large language models (LLMs) on mobile devices with less than 1 GB of memory. However, MeBP requires backward computation through all transformer layers at every step, where weight decompression alone accounts for 32--42\% of backward time. We propose Layer-Cyclic Selective Backpropagation (LCSB), which computes gradients for only a subset of layers per step. Our key insight is that residual connections guarantee gradient flow through identity paths, while AdamW momentum provides implicit updates for non-selected layers. We interpret LCSB as Block Coordinate Descent on the LoRA parameter space, providing theoretical justification for convergence. LCSB achieves up to 1.40$\times$ speedup with less than 2\% quality degradation across five models and three tasks. Surprisingly, in 4-bit quantized settings, LCSB exhibits superior stability: a 3B model that completely diverges under full backpropagation converges smoothly with LCSB, suggesting an implicit regularization effect from selective gradient computation.
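The cyclic layer selection described above can be sketched as a simple scheduling routine. This is a minimal illustration under assumed parameter names (`num_layers`, `layers_per_step`), not the paper's implementation:

```python
# Minimal sketch of the layer-cyclic selection in LCSB: at each training
# step, gradients are computed only for a small rotating subset of
# transformer layers, so every layer is visited once per full cycle.
# Names and interface are illustrative assumptions, not the paper's code.

def lcsb_schedule(num_layers, layers_per_step, num_steps):
    """Yield, for each step, the indices of layers whose gradients
    are computed; the window advances cyclically so all layers are
    updated with equal frequency over a cycle."""
    start = 0
    for _ in range(num_steps):
        selected = [(start + i) % num_layers for i in range(layers_per_step)]
        yield selected
        start = (start + layers_per_step) % num_layers

# Example: 8 transformer layers, 2 layers selected per step.
schedule = list(lcsb_schedule(num_layers=8, layers_per_step=2, num_steps=5))
# Steps 0..4 select [0,1], [2,3], [4,5], [6,7], then wrap to [0,1].
```

Non-selected layers still participate in the forward pass; only their backward computation (and hence weight decompression for gradients) is skipped for that step, with AdamW momentum carrying their updates between selections.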