Diffusion models have emerged as frontrunners in text-to-image generation, but their fixed image resolution during training often leads to challenges in high-resolution image generation, such as semantic deviations and object replication. This paper introduces MegaFusion, a novel approach that extends existing diffusion-based text-to-image models towards efficient higher-resolution generation without additional fine-tuning or adaptation. Specifically, we employ an innovative truncate and relay strategy to bridge the denoising processes across different resolutions, allowing for high-resolution image generation in a coarse-to-fine manner. Moreover, by integrating dilated convolutions and noise re-scheduling, we further adapt the model's priors for higher resolution. The versatility and efficacy of MegaFusion make it universally applicable to both latent-space and pixel-space diffusion models, as well as their derivatives. Extensive experiments confirm that MegaFusion significantly boosts the capability of existing models to produce megapixel images with various aspect ratios, while requiring only about 40% of the original computational cost.
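The coarse-to-fine control flow described above can be sketched as follows. This is a minimal toy illustration of a truncate-and-relay schedule, not the paper's implementation: the function names, the truncation step, the noise weight, and the placeholder `denoise_step` (which simply shrinks the sample in place of a real diffusion model call) are all assumptions made for clarity.

```python
import numpy as np

def truncate_and_relay(total_steps=50, truncate_step=30, low=64, high=128, seed=0):
    """Toy sketch of coarse-to-fine denoising across two resolutions.

    Phase 1 denoises at the model's native (low) resolution; the partial
    result is then truncated, upsampled, re-noised, and relayed to finish
    denoising at the target (high) resolution. All numeric choices here
    are illustrative, not taken from the MegaFusion paper.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((low, low))  # start from pure noise at low res

    def denoise_step(x, t):
        # Placeholder for a pretrained diffusion model's denoising call.
        return 0.98 * x

    # Phase 1: coarse denoising at the native training resolution.
    for t in range(total_steps, truncate_step, -1):
        x = denoise_step(x, t)

    # Truncate: upsample the intermediate result to the target resolution
    # (nearest-neighbor here; the real pipeline could use any upsampler).
    scale = high // low
    x = np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

    # Relay: re-inject noise consistent with the truncation step
    # (a crude stand-in for the paper's noise re-scheduling), then resume.
    x = x + 0.1 * rng.standard_normal(x.shape)
    for t in range(truncate_step, 0, -1):
        x = denoise_step(x, t)

    return x
```

Because the high-resolution phase runs for only part of the schedule, most denoising steps happen at the cheap low resolution, which is the intuition behind the reduced computational cost.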