Self-supervised pre-training has proven highly effective for many computer vision tasks, particularly when labelled data are scarce. In the context of Earth Observation (EO), foundation models and various other Vision Transformer (ViT)-based approaches have been successfully applied for transfer learning to downstream tasks. However, it remains unclear under which conditions pre-trained models offer significant advantages over training from scratch. In this study, we investigate the effectiveness of pre-training ViT-based Masked Autoencoders (MAE) for downstream EO tasks, focusing on reconstruction, segmentation, and classification. We consider two large ViT-based MAE pre-trained models: a foundation model (Prithvi) and SatMAE. We evaluate Prithvi on reconstruction and segmentation downstream tasks, and SatMAE on a classification downstream task. Our findings suggest that pre-training is particularly beneficial when the fine-tuning task closely resembles the pre-training task, e.g., reconstruction. In contrast, for tasks such as segmentation or classification, training from scratch with task-specific hyperparameter adjustments proved equally or more effective.