Inducing and leveraging sparse activations during training and inference is a promising avenue for improving the computational efficiency of deep networks, which is increasingly important as network sizes continue to grow and their application becomes more widespread. Here we use the large-width Gaussian process limit to analyze the behaviour, at random initialization, of nonlinear activations that induce sparsity in the hidden outputs. A previously unreported form of training instability is proven for two of arguably the most natural candidates for hidden-layer sparsification: shifted ReLU ($\phi(x)=\max(0, x-\tau)$ for $\tau\ge 0$) and soft thresholding ($\phi(x)=0$ for $|x|\le\tau$ and $x-\text{sign}(x)\tau$ for $|x|>\tau$). We show that this instability is overcome by clipping the activation magnitude at a level prescribed by the shape of the associated Gaussian process variance map. Numerical experiments verify the theory and show that the proposed magnitude-clipped sparsifying activations can be trained to training and test fractional sparsity as high as 85\% while retaining close to full accuracy.
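Below is a minimal sketch (not the authors' code) of the three activations discussed above, using NumPy. The threshold $\tau$ and the clipping level `c` are illustrative placeholders; in the paper the clipping level is prescribed by the shape of the Gaussian process variance map, which is not reproduced here.

```python
import numpy as np

def shifted_relu(x, tau=0.5):
    """Shifted ReLU: phi(x) = max(0, x - tau); exactly zero for x <= tau."""
    return np.maximum(0.0, x - tau)

def soft_threshold(x, tau=0.5):
    """Soft thresholding: zero on [-tau, tau], shrinks |x| by tau elsewhere."""
    return np.sign(x) * np.maximum(0.0, np.abs(x) - tau)

def magnitude_clipped(phi, c=1.0):
    """Wrap an activation so its output magnitude is clipped at c.
    Here c is a hypothetical stand-in for the level the paper derives
    from the Gaussian process variance map."""
    return lambda x, **kwargs: np.clip(phi(x, **kwargs), -c, c)

# Example: fraction of exactly-zero outputs on Gaussian preactivations,
# mimicking the random-initialization setting analyzed in the paper.
x = np.random.randn(10_000)
act = magnitude_clipped(soft_threshold, c=2.0)
print("fractional sparsity:", np.mean(act(x, tau=0.7) == 0.0))
```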