Attention mechanisms have significantly advanced visual models by capturing global context effectively. However, their reliance on large-scale datasets and substantial computational resources poses challenges in data-scarce and resource-constrained scenarios. Moreover, traditional self-attention lacks inherent spatial inductive biases, making it suboptimal for modeling the local features critical to tasks with smaller datasets. In this work, we introduce Large Kernel Convolutional Attention (LKCA), a novel formulation that reinterprets the attention operation as a single large-kernel convolution. This design unifies the locality and translation invariance of convolutional architectures with the global context modeling of self-attention. By embedding these properties in a computationally efficient framework, LKCA addresses key limitations of traditional attention mechanisms and achieves competitive performance across a range of visual tasks, particularly in data-constrained settings. Experimental results on CIFAR-10, CIFAR-100, SVHN, and Tiny-ImageNet show that LKCA excels at image classification, outperforming conventional attention mechanisms and vision transformers in compact model settings. These findings highlight the effectiveness of LKCA in bridging local and global feature modeling, offering a practical and robust solution for real-world applications with limited data and resources.
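To make the core idea concrete, the following is a minimal NumPy sketch of replacing token-to-token attention with one large depthwise convolution over the spatial token grid. The function name, shapes, and the plain nested-loop convolution are illustrative assumptions, not the authors' implementation; the point is only that a sufficiently large kernel lets every output token aggregate context from most of the grid at convolutional cost.

```python
import numpy as np

def large_kernel_conv_attention(x, kernel):
    """Hypothetical LKCA-style mixing (illustrative, not the paper's code).

    x:      (H, W, C) feature map of spatial tokens
    kernel: (K, K, C) depthwise large kernel, one K x K filter per channel
    """
    H, W, C = x.shape
    K = kernel.shape[0]
    pad = K // 2
    # Zero-pad so each token keeps a full K x K receptive field at the borders.
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + K, j:j + K, :]                 # local window
            out[i, j] = (patch * kernel).sum(axis=(0, 1))   # per-channel mix
    return out

# With K close to the grid size, each output token sees (almost) the whole
# image, approximating the global aggregation normally done by self-attention
# without forming an N x N attention matrix.
x = np.random.randn(8, 8, 16)
kernel = np.random.randn(7, 7, 16) / 49.0
y = large_kernel_conv_attention(x, kernel)
print(y.shape)  # (8, 8, 16)
```

In a real model this per-token loop would be a single grouped (depthwise) convolution call, which is what keeps the cost linear in the number of tokens rather than quadratic as in self-attention.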