Model compression is crucial for deploying neural networks (NNs), especially in the many applications where the memory and storage of computing devices are limited. This paper focuses on two widely used model compression techniques: low-rank approximation and weight pruning. However, training NNs with low-rank approximation and weight pruning often suffers from significant accuracy loss and convergence issues. In this paper, we propose a holistic framework for model compression from the novel perspective of nonconvex optimization, built around a carefully designed objective function. We then introduce NN-BCD, a block coordinate descent (BCD) algorithm for solving this nonconvex problem. One advantage of our algorithm is that its iteration scheme admits efficient closed-form updates and is gradient-free; consequently, it does not suffer from vanishing or exploding gradient problems. Furthermore, using the Kurdyka-{\L}ojasiewicz (K{\L}) property of our objective function, we show that our algorithm converges globally to a critical point at a rate of O(1/k), where k denotes the number of iterations. Finally, extensive experiments with tensor train decomposition and weight pruning demonstrate the efficiency and superior performance of the proposed framework. Our code implementation is available at https://github.com/ChenyangLi-97/NN-BCD