The recent surge in large-scale generative models has spurred development across vast fields of computer vision. In particular, text-to-image diffusion models have garnered widespread adoption across diverse domains due to their potential for high-fidelity image generation. Nonetheless, existing large-scale diffusion models are confined to generating images of up to 1K resolution, which falls far short of the demands of contemporary commercial applications. Directly sampling higher-resolution images often yields results marred by artifacts such as object repetition and distorted shapes. Addressing these issues typically necessitates training or fine-tuning models on higher-resolution datasets, which poses a formidable challenge due to the difficulty of collecting large-scale high-resolution content and the substantial computational resources required. While several preceding works have proposed alternatives, they often fail to produce convincing results. In this work, we probe the generative ability of diffusion models at resolutions beyond their original capability and propose a novel progressive approach that fully utilizes the generated low-resolution image to guide the generation of the higher-resolution image. Our method obviates the need for additional training or fine-tuning, significantly lowering the computational cost. Extensive experiments validate the efficiency and efficacy of our method. Project page: https://yhyun225.github.io/DiffuseHigh/
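To make the progressive low-to-high guidance idea concrete, the sketch below shows a simplified, training-free upsample-and-refine loop built on Hugging Face diffusers: a base image is sampled at the model's native resolution, then repeatedly upscaled and lightly re-denoised at higher resolutions so that the low-resolution result guides the higher-resolution pass. This is only an illustrative approximation; the model checkpoints, resolution schedule, and noise strength are assumptions, and the paper's actual guidance mechanism is not reproduced here.

```python
# Minimal sketch: progressive, training-free high-resolution sampling with diffusers.
# Assumptions (not from the paper): SDXL checkpoints, a 1024 -> 2048 -> 4096 schedule,
# and a fixed img2img strength; the paper's guidance differs in its details.
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

device = "cuda"
prompt = "a photo of a lighthouse on a cliff at sunset, highly detailed"

# 1) Sample a base image at the model's native (low) resolution.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to(device)
image = base(prompt, height=1024, width=1024).images[0]

# 2) Progressively upscale, then lightly noise and re-denoise at each target size,
#    so the previous (lower-resolution) result guides the higher-resolution pass.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to(device)

for size in (2048, 4096):  # illustrative resolution schedule
    upscaled = image.resize((size, size), Image.LANCZOS)
    # A small strength preserves the low-resolution structure while adding detail.
    image = refiner(prompt, image=upscaled, strength=0.3).images[0]

image.save("progressive_highres_sketch.png")
```

In this sketch the img2img strength loosely plays the role of structural guidance from the low-resolution image; the paper's method defines that guidance more carefully, which is what avoids the repetition and distortion artifacts of naive high-resolution sampling.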