We introduce a new benchmark designed to advance the development of general-purpose, large-scale vision-language models for remote sensing images. Although several vision-language datasets in remote sensing have been proposed to pursue this goal, existing datasets are typically tailored to single tasks, lack detailed object information, or suffer from inadequate quality control. To address these gaps, we present a Versatile vision-language Benchmark for Remote Sensing image understanding, termed VRSBench. This benchmark comprises 29,614 images paired with 29,614 human-verified detailed captions, 52,472 object references, and 123,221 question-answer pairs, and it supports the training and evaluation of vision-language models across a broad spectrum of remote sensing image understanding tasks. We further evaluate state-of-the-art models on this benchmark for three vision-language tasks: image captioning, visual grounding, and visual question answering. Our work aims to contribute significantly to the development of advanced vision-language models in the field of remote sensing. The data and code can be accessed at https://github.com/lx709/VRSBench.
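As a rough illustration of how a benchmark with these three annotation types might be consumed, the minimal sketch below loads a hypothetical VRSBench-style annotation file and tallies captions, object references, and question-answer pairs. The file name (`annotations.json`) and the field names (`caption`, `objects`, `qa_pairs`) are illustrative assumptions, not the dataset's actual schema; consult the repository linked above for the real format.

```python
import json
from pathlib import Path

def summarize_annotations(path: str) -> None:
    """Tally annotation types in a hypothetical VRSBench-style JSON file.

    NOTE: the field names "caption", "objects", and "qa_pairs" are
    assumptions for illustration; the actual VRSBench schema may differ.
    """
    records = json.loads(Path(path).read_text())
    n_captions = sum(1 for r in records if r.get("caption"))
    n_refs = sum(len(r.get("objects", [])) for r in records)
    n_qa = sum(len(r.get("qa_pairs", [])) for r in records)
    print(f"images: {len(records)}, captions: {n_captions}, "
          f"object references: {n_refs}, QA pairs: {n_qa}")

if __name__ == "__main__":
    summarize_annotations("annotations.json")  # hypothetical file name
```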