Self-Supervised Learning (SSL) enables us to pre-train foundation models without costly labeled data. Among SSL methods, Contrastive Learning (CL) is better at obtaining accurate semantic representations under noise interference. However, while CL methods have achieved great success in many computer vision tasks, the significant domain gap means they still require specific adaptation for Remote Sensing (RS) images. To this end, we present a novel self-supervised method called PerA, which produces all-purpose RS features through semantically Perfectly Aligned sample pairs. Specifically, PerA obtains features from sampled views by applying spatially disjoint masks to augmented images rather than random cropping, so the paired views depict the same scene without sharing pixels. Our framework provides high-quality features by enforcing consistency between teacher and student and by predicting learnable mask tokens. Compared with previous contrastive methods, our method is more memory-efficient and can be trained with larger batches owing to its sparse inputs. The proposed method also adapts well to uncurated RS data and reduces the impact of potential semantic inconsistency. In addition, we collect an unlabeled pre-training dataset of about 5 million RS images. Experiments on multiple downstream task datasets achieve performance comparable to previous state-of-the-art methods at a limited model scale, demonstrating the effectiveness of our approach. We hope this work will contribute to practical remote sensing interpretation.
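The abstract does not give implementation details, but the core sampling idea — randomly partitioning an image's patches into two spatially disjoint views so that teacher and student see non-overlapping pixels of the same augmented scene — can be sketched as follows. This is a minimal illustration under our own assumptions (the function name `disjoint_masks`, the 50/50 split ratio, and the 14×14 patch grid are hypothetical, not taken from the paper):

```python
import numpy as np

def disjoint_masks(num_patches: int, ratio: float = 0.5, seed=None):
    """Randomly partition patch indices into two spatially disjoint masks.

    Every patch of the augmented image is assigned to exactly one of the
    two views, so the views never share pixels yet jointly cover the scene.
    (Hypothetical sketch; the split ratio is an assumption.)
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_patches)       # random patch ordering
    split = int(num_patches * ratio)          # size of the first view
    mask_a = np.zeros(num_patches, dtype=bool)
    mask_b = np.zeros(num_patches, dtype=bool)
    mask_a[perm[:split]] = True               # patches kept for view A
    mask_b[perm[split:]] = True               # remaining patches for view B
    return mask_a, mask_b

# Example: 196 patches, i.e. a 14x14 grid for a 224px image with 16px patches.
a, b = disjoint_masks(196, seed=0)
assert not np.any(a & b)   # disjoint: no patch appears in both views
assert np.all(a | b)       # complete: together the views cover the image
```

Because each view keeps only a subset of patches, the encoder processes sparse inputs, which is what allows larger batch sizes at the same memory budget.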