In recent years, semantic segmentation has become a pivotal tool in processing and interpreting satellite imagery. Yet a prevalent limitation of supervised learning techniques remains the need for extensive manual annotations by experts. In this work, we explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks. The main idea is to learn the joint data manifold of images and labels, leveraging recent advancements in denoising diffusion probabilistic models. To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation. We find that the obtained pairs not only display high quality in fine-scale features but also ensure wide sampling diversity. Both aspects are crucial for earth observation data, where semantic classes can vary widely in scale and occurrence frequency. We employ the novel data instances as a form of data augmentation for downstream segmentation. In our experiments, we provide comparisons to prior works based on discriminative diffusion models or GANs. We demonstrate that integrating generated samples yields significant quantitative improvements for satellite semantic segmentation, both over baselines and over training only on the original data.