The recent surge in large-scale generative models has spurred rapid development across computer vision. In particular, text-to-image diffusion models have garnered widespread adoption across diverse domains due to their capacity for high-fidelity image generation. Nonetheless, existing large-scale diffusion models are confined to generating images of up to 1K resolution, which falls far short of the demands of contemporary commercial applications. Directly sampling higher-resolution images often yields results marred by artifacts such as object repetition and distorted shapes. Addressing these issues typically necessitates training or fine-tuning models on higher-resolution datasets; however, this poses a formidable challenge due to the difficulty of collecting large-scale high-resolution content and the substantial computational resources required. While several prior works have proposed alternatives, they often fail to produce convincing results. In this work, we probe the generative ability of diffusion models at resolutions beyond their original capability and propose a novel progressive approach that fully utilizes a generated low-resolution image to guide the generation of its higher-resolution counterpart. Our method obviates the need for additional training or fine-tuning, significantly lowering computational costs. Extensive experiments validate the efficiency and efficacy of our method.
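The progressive, low-resolution-guided idea described above can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: `toy_denoiser` is a stand-in smoothing function, not a real pretrained diffusion denoiser, and the noise-then-denoise step only mimics the shape of an SDEdit-style guided sampling pipeline (upsample the low-resolution result, partially re-noise it, then denoise at the target resolution).

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, steps):
    """Stand-in for a pretrained diffusion denoiser (hypothetical).
    Each step replaces a pixel with the mean of its 4 neighbors."""
    for _ in range(steps):
        padded = np.pad(x, 1, mode="edge")
        x = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return x

def upsample(x, factor):
    """Nearest-neighbor upsampling of the low-resolution result."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def progressive_generate(low_res, factor=2, noise_level=0.3, steps=5):
    """Guide higher-resolution sampling with the generated low-res image:
    upsample it, re-noise it partially, then denoise at the new resolution."""
    guide = upsample(low_res, factor)
    noisy = guide + noise_level * rng.standard_normal(guide.shape)
    return toy_denoiser(noisy, steps)

# 'Generate' a low-resolution image, then lift it to 2x resolution.
low = toy_denoiser(rng.standard_normal((8, 8)), steps=3)
high = progressive_generate(low, factor=2)
print(high.shape)  # (16, 16)
```

The key point the sketch captures is that no weights are trained or fine-tuned: the same (frozen) denoiser is reused at the higher resolution, and the low-resolution output enters only as initialization for the partially re-noised sample.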