While the pretraining of Foundation Models (FMs) for remote sensing (RS) imagery is on the rise, models remain restricted to a few hundred million parameters. Scaling models to billions of parameters has been shown to yield unprecedented benefits, including emergent abilities, but requires data scaling and computing resources typically unavailable outside industry R&D labs. In this work, we pair high-performance computing resources, including the Frontier supercomputer, America's first exascale system, with high-resolution optical RS data to pretrain billion-scale FMs. Our study assesses the performance of different pretrained variants of vision Transformers across image classification, semantic segmentation, and object detection benchmarks, highlighting the importance of data scaling for effective model scaling. Moreover, we discuss the construction of a novel TIU pretraining dataset and model initialization, with the data and pretrained models intended for public release. By discussing technical challenges and details often lacking in the related literature, this work aims to offer best practices to the geospatial community for the efficient training and benchmarking of larger FMs.