Remote sensing imagery, despite its broad applications in helping achieve Sustainable Development Goals and tackle climate change, has not yet benefited from recent advances in versatile, task-agnostic vision-language models (VLMs). A key reason is that the large-scale, semantically diverse image-text dataset required for developing VLMs is still absent for remote sensing images. Unlike natural images, remote sensing images and their associated text descriptions cannot be efficiently collected from the public Internet at scale. In this work, we bridge this gap by using geo-coordinates to automatically connect open, unlabeled remote sensing images with the rich semantics covered in OpenStreetMap, and thus construct SkyScript, a comprehensive vision-language dataset for remote sensing images, comprising 2.6 million image-text pairs covering 29K distinct semantic tags. With continual pre-training on this dataset, we obtain a VLM that surpasses baseline models with a 6.2% average accuracy gain in zero-shot scene classification across seven benchmark datasets. It also demonstrates zero-shot transfer capability for fine-grained object attribute classification and cross-modal retrieval. We hope this dataset can support the advancement of VLMs for various multi-modal tasks in remote sensing, such as open-vocabulary classification, retrieval, captioning, and text-to-image synthesis.
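As a rough illustration of the geo-coordinate linking idea, the sketch below queries OpenStreetMap's public Overpass API for the tags of all map elements falling inside an image tile's geographic bounding box. This is a minimal sketch under assumed conventions (the Overpass endpoint, the bounding-box query shape, and the tag verbalization are illustrative choices, not the paper's released pipeline):

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public Overpass endpoint

def osm_tags_in_bbox(south, west, north, east):
    """Fetch OpenStreetMap tags for all nodes and ways whose coordinates
    fall inside an image tile's bounding box (degrees, WGS84)."""
    query = f"""
    [out:json][timeout:25];
    (
      node({south},{west},{north},{east});
      way({south},{west},{north},{east});
    );
    out tags;
    """
    response = requests.post(OVERPASS_URL, data={"data": query})
    response.raise_for_status()
    tags = []
    for element in response.json().get("elements", []):
        tags.extend(f"{k}: {v}" for k, v in element.get("tags", {}).items())
    return tags

# Example: collect the tags covering a small area; downstream, such tags
# could be verbalized into a caption (e.g., "a golf course; a pond") and
# paired with the co-located image tile to form one image-text pair.
print(osm_tags_in_bbox(37.42, -122.17, 37.43, -122.16))
```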
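For the downstream evaluation side, a minimal CLIP-style zero-shot scene-classification sketch, assuming the open_clip library with a generic pretrained checkpoint; the actual SkyScript-trained weights, class names, and prompt templates may differ:

```python
import torch
import open_clip
from PIL import Image

# Load a CLIP-style model; this generic LAION checkpoint is a stand-in —
# continual pre-training on SkyScript would replace these weights.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Hypothetical scene classes and prompt template for illustration.
classes = ["airport", "farmland", "forest", "harbor", "residential area"]
prompts = tokenizer([f"a satellite image of a {c}" for c in classes])
image = preprocess(Image.open("tile.png")).unsqueeze(0)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    # Cosine similarity between the image and each class prompt.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

print(classes[probs.argmax().item()])  # predicted scene label
```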