In this work, we improve the analysis of the running time of SparseGPT [Frantar, Alistarh ICML 2023] from $O(d^{3})$ to $O(d^{\omega} + d^{2+a+o(1)} + d^{1+\omega(1,1,a)-a})$ for any $a \in [0, 1]$, where $\omega$ is the exponent of matrix multiplication. In particular, for the current bound $\omega \approx 2.371$ [Alman, Duan, Williams, Xu, Xu, Zhou 2024], our running time simplifies to $O(d^{2.53})$. This speedup comes from analyzing the lazy-update behavior that arises in iterative maintenance problems, as in [Deng, Song, Weinstein 2022; Brand, Song, Zhou ICML 2024].
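To see how the parameter $a$ trades off the three terms, the following sketch numerically balances $d^{2+a}$ against $d^{1+\omega(1,1,a)-a}$. The exact rectangular exponent $\omega(1,1,a)$ is not reproduced here; as a loose stand-in we use the standard convexity upper bound $\omega(1,1,a) \le 2 + a(\omega - 2)$ (interpolating between $\omega(1,1,0)=2$ and $\omega(1,1,1)=\omega$), so the exponent obtained below is weaker than the stated $2.53$, which relies on tighter rectangular matrix-multiplication tables.

```python
# Sketch: choose a in [0,1] minimizing the exponent in
#   O(d^omega + d^{2+a} + d^{1+omega(1,1,a)-a}).
# Assumption: omega(1,1,a) is replaced by the convexity upper bound
#   omega(1,1,a) <= 2 + a*(omega - 2),
# so the resulting exponent is an over-estimate of the true 2.53.

OMEGA = 2.371  # current matrix-multiplication exponent

def omega_rect_upper(a: float) -> float:
    """Convexity upper bound on the rectangular exponent omega(1,1,a)."""
    return 2.0 + a * (OMEGA - 2.0)

def total_exponent(a: float) -> float:
    """Exponent of the dominating term for a given choice of a."""
    return max(OMEGA, 2.0 + a, 1.0 + omega_rect_upper(a) - a)

# Grid search over a; the optimum balances the last two terms,
# giving a = 1 / (4 - omega) under this crude bound.
best_a = min((i / 10000 for i in range(10001)), key=total_exponent)
print(f"a ~ {best_a:.4f}, exponent ~ {total_exponent(best_a):.4f}")
```

Under this crude bound the balance point is $a = 1/(4-\omega) \approx 0.614$, giving exponent about $2.614$; substituting the sharper known values of $\omega(1,1,a)$ is what brings the bound down to $2.53$.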