The groundbreaking performance of transformers in Natural Language Processing (NLP), achieved through the efficiency and accuracy of the self-attention mechanism, has inspired researchers to explore their use in computer vision, where they are increasingly replacing traditional Convolutional Neural Networks (CNNs) to attain enhanced long-range semantic awareness. Vision transformers (ViTs) have excelled in various computer vision tasks owing to their superior ability to capture long-distance dependencies through self-attention. Contemporary ViTs such as Data-efficient image Transformers (DeiT) can learn both global semantic information and local texture information from images, achieving performance comparable to traditional CNNs. However, this impressive performance comes at a high computational cost due to their very large number of parameters, hindering deployment on devices with limited resources such as smartphones, cameras, and drones. Additionally, ViTs require large amounts of training data to match benchmark CNN models. We therefore identify two key challenges in deploying ViTs on small form-factor devices: the high computational requirements of large models and the need for extensive training data. To address these challenges, we propose compressing large ViT models using Knowledge Distillation (KD), implemented in a data-free manner to circumvent limitations on data availability. In addition to classification, we also conduct object detection experiments in the same setting. Our analysis shows that data-free knowledge distillation is an effective method for overcoming both issues, enabling the deployment of ViTs on resource-constrained devices.
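To make the proposed compression step concrete, the sketch below illustrates one student-update step of generator-based data-free knowledge distillation in PyTorch. It is a minimal sketch under stated assumptions, not the paper's exact method: the generator, latent dimension z_dim, temperature T, and batch size are illustrative choices, and the adversarial update of the generator itself is omitted.

```python
import torch
import torch.nn.functional as F

def data_free_kd_step(teacher, student, generator, optimizer,
                      z_dim=128, batch_size=64, T=2.0):
    """One hypothetical student update in data-free KD (illustrative sketch).

    No real training data is used: a generator synthesizes pseudo-images
    from random noise, and the student learns to mimic the teacher's
    temperature-softened output distribution on them.
    """
    teacher.eval()
    student.train()

    # Sample latent noise and synthesize a batch of pseudo-images.
    z = torch.randn(batch_size, z_dim)
    fake_images = generator(z)

    # The teacher provides soft targets; no gradients flow through it.
    with torch.no_grad():
        t_logits = teacher(fake_images)
    s_logits = student(fake_images)

    # KL divergence between temperature-softened distributions, scaled by
    # T^2 (the standard correction that keeps gradient magnitudes comparable).
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full data-free pipeline the generator is typically trained adversarially against this objective, maximizing teacher-student disagreement so that the synthetic images keep probing inputs on which the student has not yet matched the teacher.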