We demonstrate generating high-dynamic-range (HDR) images through the concerted action of multiple black-box, pre-trained low-dynamic-range (LDR) image diffusion models. Common diffusion models do not generate HDR images because, first, no sufficiently large HDR image dataset is available to re-train them, and second, even if one were, re-training such models is infeasible for most compute budgets. Instead, we draw inspiration from the HDR image capture literature, which traditionally fuses sets of LDR images, called "brackets", into a single HDR image. We run multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result. To this end, we introduce an exposure consistency term into the diffusion process that couples the brackets so they agree across the exposure range they share. We demonstrate HDR versions of state-of-the-art unconditional and conditional as well as restoration-type (LDR2HDR) generative modeling.
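The coupling idea can be sketched in code. The following is a minimal, hypothetical illustration of an exposure-consistency step, not the authors' exact formulation: per-bracket estimates are merged into a shared linear radiance map (here with a simple hat weighting that favors well-exposed pixels, an assumption on our part), and each bracket is then re-exposed from that shared map so all brackets agree on the radiance they jointly observe.

```python
import numpy as np

def exposure_consistency(brackets, exposures):
    """Project per-bracket image estimates onto a shared HDR radiance map.

    brackets  : list of K LDR images (arrays in [0, 1])
    exposures : list of K scalar exposure factors (hypothetical values)

    Returns the re-exposed brackets and the fused HDR estimate. This is a
    sketch of an exposure-consistency projection, assuming a simple
    weighted-average fusion; the paper's actual term may differ.
    """
    hdr_num = np.zeros_like(brackets[0])
    hdr_den = np.zeros_like(brackets[0])
    for img, e in zip(brackets, exposures):
        # Hat weighting: trust mid-tone pixels, distrust near 0 or 1.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        hdr_num += w * img / e          # linearized radiance contribution
        hdr_den += w
    hdr = hdr_num / np.maximum(hdr_den, 1e-6)
    # Re-expose each bracket from the shared radiance map and clip to LDR.
    return [np.clip(hdr * e, 0.0, 1.0) for e in exposures], hdr

# Usage: synthesize two brackets from a known radiance map, then couple them.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.1, 0.9, size=(4, 4))
exposures = [0.5, 1.0]
brackets = [np.clip(radiance * e, 0.0, 1.0) for e in exposures]
coupled, hdr = exposure_consistency(brackets, exposures)
```

In an actual multi-process denoising loop, such a projection would be applied to the intermediate clean-image predictions of each bracket's diffusion process at every step, so the black-box models never need to be re-trained.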