In recent years, deep learning models have been successfully employed to augment low-resolution cosmological simulations with small-scale information, a task known as "super-resolution". So far, these cosmological super-resolution models have relied on generative adversarial networks (GANs), which can achieve highly realistic results but suffer from various shortcomings (e.g. low sample diversity). We introduce denoising diffusion models as a powerful generative model for super-resolving cosmic large-scale structure predictions (as a first proof of concept in two dimensions). To obtain accurate results down to small scales, we develop a new "filter-boosted" training approach that redistributes the importance of different scales in the pixel-wise training objective. We demonstrate that our model not only produces convincing super-resolution images and power spectra that agree at the percent level, but also reproduces the diversity of small-scale features consistent with a given low-resolution simulation. This enables uncertainty quantification for the generated small-scale features, which is critical if such super-resolution models are to serve as viable surrogate models for cosmic structure formation.
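The "filter-boosted" objective described above could, under one plausible reading, be implemented by filtering the pixel-wise residual in Fourier space before averaging, so that a wavenumber-dependent weight redistributes how much each scale contributes to the loss. The sketch below is a minimal illustration of that idea; the function name `filter_boosted_loss` and the specific weight function are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def filter_boosted_loss(predicted_noise, true_noise, weight_of_k):
    """Pixel-wise MSE with a Fourier-space reweighting of scales.

    Hypothetical sketch: the residual between predicted and true noise
    fields is transformed to Fourier space, each mode is scaled by a
    wavenumber-dependent weight w(|k|), and the loss is the mean squared
    amplitude of the reweighted residual (normalized so that w = 1
    recovers the plain pixel-wise MSE, by Parseval's theorem).
    """
    residual = predicted_noise - true_noise          # (N, N) residual field
    res_k = np.fft.fft2(residual)                    # Fourier modes of residual

    # 2D wavenumber magnitude |k| on an N x N grid (integer wavenumbers)
    n = residual.shape[0]
    kx = np.fft.fftfreq(n) * n
    k_mag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)

    boosted = weight_of_k(k_mag) * res_k             # redistribute scale importance
    return np.mean(np.abs(boosted) ** 2) / residual.size

# Example: up-weight small scales (high |k|) relative to large scales
rng = np.random.default_rng(0)
pred = rng.standard_normal((64, 64))
true = rng.standard_normal((64, 64))
loss = filter_boosted_loss(pred, true, lambda k: 1.0 + k / 32.0)
```

With a unit weight the loss reduces to the ordinary pixel-wise MSE, while weights that grow with |k| make the network pay proportionally more attention to errors on small scales, which is the stated goal of the training modification.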