Deep learning models, despite their impressive achievements, incur high computational and memory costs that limit their usability in resource-constrained environments. Sparse neural networks alleviate these constraints by sharply reducing parameter count and computational overhead. However, existing sparse training methods often suffer from chaotic, noisy gradient signals that hinder convergence and generalization, particularly at high sparsity levels. To address this challenge, we propose Zero-Order Sharpness-Aware Minimization (ZO-SAM), a novel optimization framework that integrates zero-order optimization into the SAM procedure. Unlike conventional SAM, which requires two backpropagation passes per update, ZO-SAM selectively replaces the perturbation-step gradient with a zero-order estimate and therefore needs only a single backpropagation per step. This halves SAM's backpropagation cost, effectively eliminating its extra per-step overhead, while significantly lowering gradient variance. By harnessing SAM's ability to identify flat minima, ZO-SAM stabilizes training and accelerates convergence. These efficiency gains are particularly important in sparse training scenarios, where computational cost is the primary bottleneck limiting SAM's practicality. Moreover, models trained with ZO-SAM exhibit improved robustness under distribution shift, further broadening their practicality in real-world deployments.
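The abstract gives no pseudocode, so the following is a minimal sketch of what one ZO-SAM update might look like in PyTorch, under the assumption (consistent with the description above) that the perturbation direction is obtained from a single-probe finite-difference estimate, so only the descent gradient at the perturbed point needs backpropagation. The function name `zo_sam_step` and the hyperparameters `rho`, `mu`, and `lr` are illustrative, not from the paper.

```python
import torch

def zo_sam_step(model, loss_fn, x, y, rho=0.05, mu=1e-3, lr=0.1):
    """Hypothetical ZO-SAM step: zero-order ascent direction,
    then a single backpropagation at the perturbed weights."""
    params = [p for p in model.parameters() if p.requires_grad]

    # --- Zero-order perturbation direction: two forward passes, no backprop ---
    with torch.no_grad():
        base_loss = loss_fn(model(x), y)
        # Random unit probe direction u over all trainable parameters.
        u = [torch.randn_like(p) for p in params]
        u_norm = torch.sqrt(sum((ui ** 2).sum() for ui in u))
        u = [ui / u_norm for ui in u]
        # Finite-difference slope along u: (L(w + mu*u) - L(w)) / mu.
        for p, ui in zip(params, u):
            p.add_(mu * ui)
        probe_loss = loss_fn(model(x), y)
        for p, ui in zip(params, u):
            p.sub_(mu * ui)
        d = (probe_loss - base_loss) / mu
        # SAM-style ascent of radius rho along the estimated direction;
        # since ||u|| = 1, normalizing d*u reduces to sign(d)*u.
        eps = [rho * torch.sign(d) * ui for ui in u]
        for p, e in zip(params, eps):
            p.add_(e)

    # --- The single backpropagation, at the perturbed point ---
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    # Undo the perturbation, then descend using the perturbed-point gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
        for p in params:
            p.sub_(lr * p.grad)
    return loss.item()
```

In practice the final update would be routed through an existing optimizer (e.g. SGD with momentum) rather than the plain gradient step shown here; the sketch only illustrates how the two-backprop structure of SAM collapses to one backprop when the ascent direction is estimated zero-order.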