Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs), offering a potential solution to the deployment challenges faced by Vision Transformers (ViTs) on edge devices. However, due to the structural differences between CNN and Transformer architectures, directly applying binary CNN strategies to ViT models leads to a significant performance drop. To tackle this challenge, we propose BHViT, a binarization-friendly hybrid ViT architecture, together with its fully binarized model, guided by three important observations. First, BHViT exploits local information interaction and coarse-to-fine hierarchical feature aggregation to reduce the redundant computation caused by excessive tokens. Then, a novel module based on shift operations is proposed to enhance the performance of the binary Multilayer Perceptron (MLP) module without significantly increasing computational overhead. In addition, an innovative attention-matrix binarization method based on quantization decomposition is proposed to assess token importance within the binarized attention matrix. Finally, we propose a regularization loss to address the inadequate optimization caused by the incompatibility between weight oscillation in binary layers and the Adam optimizer. Extensive experimental results demonstrate that our proposed algorithm achieves SOTA performance among binary ViT methods.
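To make the quantization-decomposition idea concrete, the following is a minimal PyTorch sketch, assuming a standard softmax attention matrix with values in [0, 1]; the function name, the uniform quantizer, and the per-plane scales are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def binarize_attention_by_decomposition(attn, num_bits=2):
    """Hedged sketch: approximate a softmax attention matrix (values in [0, 1])
    as a scaled sum of binary matrices, via uniform quantization followed by
    bit-plane decomposition. Names and the quantizer are illustrative only."""
    levels = 2 ** num_bits - 1                  # e.g. 3 levels for 2 bits
    q = torch.round(attn.clamp(0, 1) * levels)  # integer codes in {0, ..., levels}
    planes = []
    for b in range(num_bits):
        bit = torch.floor(q / 2 ** b) % 2       # b-th binary plane, entries in {0, 1}
        planes.append((2 ** b / levels) * bit)  # binary matrix up to a scalar scale
    # Summing the scaled binary planes reconstructs the quantized attention,
    # so graded token importance is retained while each plane stays binary.
    return planes, sum(planes)
```

Under this reading, each binary plane can be multiplied with the value matrix using binary operations, and the scalar scales recombine the results so that more important tokens still receive larger attention weights.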
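The abstract does not specify the form of the regularization loss; purely to illustrate the kind of term that discourages latent weights from hovering around the sign threshold (where they oscillate under Adam's adaptive updates), here is a hedged sketch with an assumed margin hyper-parameter.

```python
import torch

def oscillation_regularizer(latent_weights, margin=0.01):
    """Hedged sketch, not the paper's actual loss: penalize latent (real-valued)
    weights whose magnitude falls below a small margin, since weights near zero
    flip sign between iterations and destabilize training of binary layers.
    `margin` is an illustrative hyper-parameter."""
    return torch.relu(margin - latent_weights.abs()).sum()
```

A term of this shape would be added to the task loss, nudging latent weights away from the binarization boundary without constraining weights that are already clearly positive or negative.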