The Vision Transformer (ViT) divides images into patches and leverages the Transformer encoder to capture global information, achieving superior performance across various computer vision tasks. However, the self-attention mechanism of ViT models global context from the outset, overlooking the inherent relationships between neighboring pixels and the fine-grained local details they carry. Consequently, ViT lacks inductive bias when trained on image or video datasets. In contrast, convolutional neural networks (CNNs), through their reliance on local filters, possess an inherent inductive bias that lets them converge faster than ViT and generalize better from less data. In this paper, we present a lightweight Depth-Wise Convolution module that serves as a shortcut in ViT models, bypassing entire Transformer blocks so that the models capture both local and global information with minimal overhead. We further introduce two architecture variants: one applies a Depth-Wise Convolution module across multiple Transformer blocks to save parameters, and the other adds independent parallel Depth-Wise Convolution modules with different kernel sizes to enrich the local information captured. The proposed approach boosts the performance of ViT models on image classification, object detection, and instance segmentation by a large margin, especially on small datasets, as evaluated on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet for image classification, and on COCO for object detection and instance segmentation. The source code can be accessed at https://github.com/ZTX-100/Efficient_ViT_with_DW.
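To illustrate why a depth-wise convolution shortcut adds only "minimal overhead," the sketch below compares parameter counts of a standard convolution and a depth-wise convolution at a typical ViT embedding width. The embedding dimension (384) and kernel size (3) are illustrative assumptions, not values taken from the paper.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def dw_conv_params(c, k):
    # Depth-wise convolution: groups == channels, so one k x k filter per channel.
    return c * k * k

dim, k = 384, 3  # assumed embedding dim and kernel size for illustration
standard = conv_params(dim, dim, k)  # 384 * 384 * 9 = 1,327,104 parameters
depthwise = dw_conv_params(dim, k)   # 384 * 9 = 3,456 parameters
print(standard, depthwise, standard // depthwise)
```

The depth-wise variant uses a factor of `dim` fewer parameters (here 384x), which is what makes it cheap enough to attach as a shortcut around entire Transformer blocks.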