The crucial role of convolutional models, both as standalone vision models and as backbones in foundation models, necessitates effective acceleration techniques. This paper proposes a novel method to learn semi-structured sparsity patterns for convolution kernels in the form of maskings, enabling the use of readily available hardware acceleration. The approach accelerates convolutional models by more than a factor of two during inference without decreasing model performance. At the same time, the original model weights and structure remain unchanged, thus keeping the model easily updatable. Beyond the immediate practical benefit, the effect of maskings on prediction is easily quantifiable. Therefore, guarantees on model predictions under maskings are derived, showing stability bounds for learned maskings even after the original underlying model is updated.
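To make the masking mechanism concrete, the following is a minimal PyTorch sketch of a semi-structured mask (the 2:4 pattern supported by recent sparse tensor core hardware) applied on top of unchanged convolution weights. The magnitude-based selection shown here is an illustrative assumption only; the paper learns the maskings rather than deriving them from weight magnitudes, and the function names are hypothetical.

```python
import torch

def two_to_four_mask(weight: torch.Tensor) -> torch.Tensor:
    """Illustrative 2:4 semi-structured mask: in every group of four
    consecutive weights, keep the two with the largest magnitude.
    (Magnitude-based selection is a stand-in for the learned maskings.)"""
    flat = weight.reshape(-1, 4)  # groups of 4 consecutive weights
    # indices of the two smallest-magnitude entries in each group
    _, drop = flat.abs().topk(2, dim=1, largest=False)
    mask = torch.ones_like(flat)
    mask.scatter_(1, drop, 0.0)  # zero out the dropped positions
    return mask.reshape(weight.shape)

# Usage: mask a conv kernel at inference time.
# The stored weights themselves remain unchanged, so the model stays updatable.
conv = torch.nn.Conv2d(16, 32, kernel_size=3, bias=False)
mask = two_to_four_mask(conv.weight.data)
masked_weight = conv.weight.data * mask
```

Because the mask is applied multiplicatively on top of the dense weights, the effect of a masking on the prediction can be quantified directly from the masked-out weight entries, which is the basis for the stability guarantees mentioned above.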