Sharpness-aware minimization (SAM) is known to improve the generalization performance of neural networks. However, it is not yet widely used in real-world applications due to its expensive model perturbation cost. A few variants of SAM have been proposed to tackle this issue, but they generally do not reduce the cost noticeably. In this paper, we propose a lightweight layer-wise gradient norm penalizing method that tackles the expensive computational cost of SAM while maintaining its superior generalization performance. Our study empirically demonstrates that the gradient norm of the whole model can be effectively suppressed by penalizing the gradient norm of only a few critical layers. We also theoretically show that such a partial model perturbation does not harm the convergence rate of SAM, allowing it to be safely adopted in real-world applications. To demonstrate the efficacy of the proposed method, we conduct extensive experiments comparing it to mini-batch SGD and conventional SAM on representative computer vision and language modeling benchmarks.
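To make the layer-wise idea concrete, the following is a minimal PyTorch-style sketch of a SAM-like step in which only a user-chosen subset of layer parameters is perturbed. The argument names (perturb_params, rho) and the selection of "critical" layers are our own illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch: SAM-style update restricted to a subset of "critical" layers.
# All names below (layerwise_sam_step, perturb_params, rho) are hypothetical.
import torch

def layerwise_sam_step(model, loss_fn, data, target, base_opt,
                       perturb_params, rho=0.05):
    """One SAM-like step that perturbs only the parameters in `perturb_params`."""
    # 1) First forward/backward pass: gradients at the current weights.
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()

    # 2) Ascent step: perturb only the selected layers by rho * g / ||g||,
    #    where the norm is taken over those layers only.
    grads = [p.grad for p in perturb_params if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in perturb_params:
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 3) Second forward/backward pass at the partially perturbed point.
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()

    # 4) Undo the perturbation, then apply the base optimizer (e.g. SGD)
    #    using the gradients computed at the perturbed point.
    with torch.no_grad():
        for p, e in zip(perturb_params, eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
```

In this sketch, perturb_params might hold, for instance, only the parameters of a few late layers; the remaining layers skip the ascent-step bookkeeping entirely, which is where the reduced perturbation cost would come from under these assumptions.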