Diffusion models have emerged as frontrunners in text-to-image generation owing to their impressive generative capabilities. Nonetheless, their fixed image resolution during training often leads to challenges in high-resolution image generation, such as semantic inaccuracies and object replication. This paper introduces MegaFusion, a novel approach that extends existing diffusion-based text-to-image generation models towards efficient higher-resolution generation without further fine-tuning or extra adaptation. Specifically, we employ an innovative truncate-and-relay strategy to bridge the denoising processes across different resolutions, enabling high-resolution image generation in a coarse-to-fine manner. Moreover, by integrating dilated convolutions and noise re-scheduling, we further adapt the model's priors to higher resolutions. The versatility and efficacy of MegaFusion make it universally applicable to both latent-space and pixel-space diffusion models, along with their derivative models. Extensive experiments confirm that MegaFusion significantly boosts the capability of existing models to produce megapixel images at various aspect ratios, while requiring only about 40% of the original computational cost.
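The truncate-and-relay idea described above can be illustrated with a minimal sketch: denoise at low resolution for part of the schedule, truncate, upsample the intermediate result, re-inject noise (the noise re-scheduling step), and relay denoising at the higher resolution. This is a simplified toy illustration, not the authors' implementation; `denoise_step` stands in for a real model's reverse-diffusion step, and all names and constants here are hypothetical.

```python
import numpy as np

def denoise_step(x, t, rng):
    # Placeholder for one reverse-diffusion step; a real pipeline
    # would invoke the diffusion model's denoiser (e.g. a UNet) here.
    return x - 0.01 * x + 0.001 * rng.standard_normal(x.shape)

def upsample(x, scale):
    # Nearest-neighbour upsampling of a (C, H, W) latent/image tensor.
    return x.repeat(scale, axis=1).repeat(scale, axis=2)

def truncate_and_relay(shape_lo=(4, 8, 8), scale=2, total_steps=50,
                       truncate_at=25, relay_noise=0.1, seed=0):
    """Coarse-to-fine generation in the spirit of truncate-and-relay
    (illustrative sketch only):
      1. Denoise at low resolution for the first `truncate_at` steps.
      2. Truncate, upsample, and re-inject noise (noise re-scheduling).
      3. Relay the remaining denoising steps at high resolution.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape_lo)               # start from pure noise
    for t in range(total_steps, total_steps - truncate_at, -1):
        x = denoise_step(x, t, rng)                 # coarse-stage denoising
    x = upsample(x, scale)                          # bridge resolutions
    x = x + relay_noise * rng.standard_normal(x.shape)  # re-noise
    for t in range(total_steps - truncate_at, 0, -1):
        x = denoise_step(x, t, rng)                 # fine-stage denoising
    return x

out = truncate_and_relay()
print(out.shape)  # (4, 16, 16)
```

Because the relayed stage reuses the partially denoised coarse result rather than restarting from pure noise, only part of the schedule runs at the expensive high resolution, which is consistent with the reduced computational cost reported above.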