With the rapid development of intelligent transportation systems and the growing prevalence of smart city infrastructure, vehicle re-identification (Re-ID) has become an important research field. A key challenge in vehicle Re-ID is the high similarity between different vehicles. Existing methods use additional detection or segmentation models to extract discriminative local features; however, they either rely on extra annotations or substantially increase the computational cost. Using attention mechanisms to capture both global and local features is therefore crucial for addressing the challenge of high inter-class similarity in vehicle Re-ID. In this paper, we propose LKA-ReID, a vehicle Re-ID method built on large kernel attention. Specifically, large kernel attention (LKA) combines the advantages of self-attention with those of convolution, enabling more comprehensive extraction of the global and local features of the vehicle. We also introduce hybrid channel attention (HCA), which combines channel attention with spatial information so that the model can better focus on informative channels and feature regions while ignoring background and other distracting information. Experiments on the VeRi-776 dataset demonstrate the effectiveness of LKA-ReID, which reaches 86.65% mAP and 98.03% Rank-1.
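As background on why large kernel attention can capture both global and local features cheaply: in the LKA design introduced by the Visual Attention Network (which this paper's LKA module builds on; the specific kernel sizes below are an assumption for illustration), one large K×K convolution is decomposed into a small depthwise convolution, a depthwise dilated convolution, and a 1×1 pointwise convolution. A minimal sketch comparing parameter counts under that assumed configuration:

```python
def conv_params(dim: int, k: int) -> int:
    """Parameters of a dense k x k convolution with dim input/output channels (no bias)."""
    return dim * dim * k * k

def lka_params(dim: int, dw_k: int = 5, dd_k: int = 7) -> int:
    """Parameters of an LKA-style decomposition (assumed configuration):
    a dw_k x dw_k depthwise conv, a dd_k x dd_k dilated depthwise conv,
    and a 1x1 pointwise conv (no bias)."""
    return dim * dw_k * dw_k + dim * dd_k * dd_k + dim * dim

dim = 256
dense = conv_params(dim, 21)   # dense 21x21 conv: 28,901,376 params
decomposed = lka_params(dim)   # LKA decomposition: 84,480 params
print(f"dense 21x21: {dense:,}  vs  LKA decomposition: {decomposed:,}")
```

The decomposition covers a comparably large receptive field at a small fraction of the parameters and FLOPs, which is what lets LKA model long-range (global) dependencies while the small depthwise kernel preserves local detail.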