Foundation models have had a significant impact across AI applications, enabling use cases that were previously impossible. Contrastive Visual Language Models (VLMs), in particular, have outperformed other techniques on many tasks. However, their adoption in remote sensing (RS) remains limited due to the scarcity of diverse remote-sensing visual-language datasets. In this work, we introduce two novel image-caption datasets for training remote sensing foundation models. The first dataset pairs aerial and satellite imagery with captions generated by Gemini from landmarks extracted from Google Maps. The second dataset uses public web images and their corresponding alt-text, filtered for the remote sensing domain, yielding a diverse dataset with greater breadth in image styles and subject matter. We use these datasets to pre-train the MaMMUT~\citep{kuo2023mammutsimplearchitecturejoint} VLM architecture, achieving state-of-the-art generalization performance in zero-shot cross-modal retrieval on well-known public benchmarks. Finally, we present our ongoing research on distilling the image-level knowledge gained during contrastive VLM training to enhance the model's localization ability. Specifically, we iteratively generate pseudo-labels for image regions from the model's attention maps and use these labels for further training. To mitigate noisy attention maps and produce robust segmentation masks, we introduce a novel attention-pooling mechanism, the Smooth-Attention-Operation.
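Zero-shot cross-modal retrieval with a contrastively trained VLM reduces to nearest-neighbor search in the shared embedding space. A minimal sketch of that retrieval step (the function names and toy embeddings below are illustrative, not from the paper, and assume the two encoders already produce embeddings of matching dimension):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale each row to unit length so that dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(image_embs, text_embs, k=1):
    """For each text query, return the indices of the top-k most
    similar images by cosine similarity in the shared embedding space."""
    sims = l2_normalize(text_embs) @ l2_normalize(image_embs).T
    return np.argsort(-sims, axis=1)[:, :k]

# Toy example: 3 hypothetical image/caption embedding pairs of dimension 4,
# where each caption embedding lies close to its paired image embedding.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(3, 4))
text_embs = image_embs + 0.01 * rng.normal(size=(3, 4))
print(retrieve(image_embs, text_embs, k=1).ravel())  # each text retrieves its own image
```

In practice the same similarity matrix drives both retrieval directions (text-to-image and image-to-text) on benchmarks of this kind; only the axis over which the top-k is taken changes.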