As the potential of foundation models in visual tasks has garnered significant attention, pretraining these models before downstream tasks has become a crucial step. The three key factors in pretraining foundation models are the pretraining method, the size of the pretraining dataset, and the number of model parameters. Recent research in the remote sensing field has focused primarily on the pretraining method and the size of the dataset, with limited emphasis on the number of model parameters. This paper addresses this gap by examining the effect of increasing the number of model parameters on the performance of foundation models in downstream tasks such as rotated object detection and semantic segmentation. We pretrained foundation models with varying numbers of parameters, including 86M, 605.26M, 1.3B, and 2.4B, to determine whether performance in downstream tasks improves as the number of parameters increases. To the best of our knowledge, this is the first billion-scale foundation model in the remote sensing field. Furthermore, we propose an effective method for scaling up and fine-tuning a vision transformer in the remote sensing field. To evaluate general performance in downstream tasks, we employed the DOTA v2.0 and DIOR-R benchmark datasets for rotated object detection, and the Potsdam and LoveDA datasets for semantic segmentation. Experimental results demonstrate that, across all benchmark datasets and downstream tasks, both the performance and the data efficiency of the foundation models improved as the number of parameters increased. Moreover, our models achieve state-of-the-art performance on several datasets, including DIOR-R, Potsdam, and LoveDA.