Recent research has focused on weight sparsity in neural network training to reduce FLOPs, aiming for improved efficiency (test accuracy relative to training FLOPs). However, sparse weight training often sacrifices accuracy, requiring extended training schedules to match the accuracy of dense models. In contrast, our approach, Sparse Iso-FLOP Transformations (Sparse-IFT), uses sparsity to improve accuracy while maintaining dense-model FLOPs. Using a single hyperparameter (the sparsity level), Sparse-IFTs efficiently replace dense layers, expanding the search space for optimal sparse masks. In addition, dynamic sparse training with Sparse-IFT models effectively navigates this larger sparse mask-weight space, as evidenced by a spectral analysis using Ramanujan graph properties. Our study reveals a robust correlation among mask topology, weights, and final performance. Notably, without adjusting any hyperparameters, replacing dense layers with Sparse-IFT yields significant improvements, such as a +3.5% accuracy gain for ResNet-18 on ImageNet and +0.9% for GPT-3 Small on the Open LLM leaderboard. To our knowledge, this is the first work to demonstrate the use of sparsity for improving the accuracy of dense models through a simple-to-use set of sparse transformations. Code is available at: https://github.com/CerebrasResearch/Sparse-IFT.
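The iso-FLOP idea above can be sketched concretely. One simple transformation consistent with the abstract's description (a minimal illustration, not the paper's exact family of transformations; the function names `iso_flop_width` and `make_sparse_wide_layer` are hypothetical) is to widen a dense layer by a factor of 1/(1 − s) while keeping only a (1 − s) fraction of its weights, so the number of multiply-accumulates stays roughly equal to the original dense layer:

```python
import numpy as np

def iso_flop_width(d_out: int, sparsity: float) -> int:
    """Widened output dimension so that sparse FLOPs match dense FLOPs.

    Dense FLOPs  ~ d_in * d_out
    Sparse FLOPs ~ d_in * d_out_wide * (1 - sparsity)
    Setting these equal gives d_out_wide = d_out / (1 - sparsity).
    """
    return int(round(d_out / (1.0 - sparsity)))

def make_sparse_wide_layer(d_in: int, d_out: int, sparsity: float, rng):
    """Return a widened weight matrix with a random sparse mask applied."""
    d_wide = iso_flop_width(d_out, sparsity)
    w = rng.standard_normal((d_in, d_wide))
    # Keep a (1 - sparsity) fraction of weights; the rest are zeroed.
    mask = rng.random((d_in, d_wide)) > sparsity
    return w * mask, mask

rng = np.random.default_rng(0)
d_in = d_out = 256
sparsity = 0.75  # the single hyperparameter

w, mask = make_sparse_wide_layer(d_in, d_out, sparsity, rng)
dense_flops = d_in * d_out
sparse_flops = int(mask.sum())  # multiply-accumulates actually performed
print(w.shape, dense_flops, sparse_flops)
```

Higher sparsity therefore buys a wider (more expressive) layer at unchanged compute, which is the search-space expansion the abstract refers to; in practice the mask would be learned with dynamic sparse training rather than drawn at random as here.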