The recent advancement of generative foundation models has ushered in a new era of image generation in the realm of natural images, revolutionizing art design, entertainment, environment simulation, and beyond. Despite producing high-quality samples, existing methods are constrained to generating images of scenes at a limited scale. In this paper, we present MetaEarth, a generative foundation model that breaks this barrier by scaling image generation to a global level, exploring the creation of worldwide, multi-resolution, unbounded, and virtually limitless remote sensing images. In MetaEarth, we propose a resolution-guided self-cascading generative framework, which enables generating images of any region across a wide range of geographical resolutions. To achieve unbounded and arbitrary-sized image generation, we design a novel noise sampling strategy for denoising diffusion models by analyzing the generation conditions and initial noise. To train MetaEarth, we construct a large dataset comprising multi-resolution optical remote sensing images with geographical information. Experiments have demonstrated the powerful capabilities of our method in generating global-scale images. Additionally, MetaEarth serves as a data engine that can provide high-quality and rich training data for downstream tasks. Our model opens up new possibilities for constructing generative world models by simulating Earth visuals from an innovative overhead perspective.
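The resolution-guided self-cascading idea can be illustrated, very loosely, as a coarse-to-fine loop in which each stage's output conditions the next stage at a finer ground-sample distance. The sketch below is a hypothetical toy (function names, the halving schedule, and the dictionary "image" records are all illustrative assumptions, not the paper's actual diffusion model):

```python
# Toy sketch of a resolution-guided self-cascading pipeline.
# ASSUMPTIONS: `fake_denoise` stands in for a full conditional denoising
# diffusion stage; the halving-per-stage schedule is illustrative only.

def fake_denoise(condition, resolution_m):
    """Stand-in for one diffusion stage: in the real model this would run
    iterative denoising conditioned on the coarser image; here it just
    returns a record describing the generated 'image'."""
    return {"resolution_m": resolution_m, "condition": condition}

def self_cascade(base_resolution_m, target_resolution_m, seed_image=None):
    """Generate coarse-to-fine: each stage's output conditions the next,
    halving the ground-sample distance (metres/pixel) per stage."""
    res = base_resolution_m
    image = seed_image
    stage_resolutions = []
    while res >= target_resolution_m:
        image = fake_denoise(condition=image, resolution_m=res)
        stage_resolutions.append(res)
        res /= 2
    return image, stage_resolutions

final_image, stage_resolutions = self_cascade(
    base_resolution_m=64.0, target_resolution_m=4.0
)
# stage_resolutions: [64.0, 32.0, 16.0, 8.0, 4.0]
```

The key property the abstract names is that conditioning on the previous stage's own output (self-cascading) lets a single model cover many geographical resolutions, rather than training one model per resolution.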