Producing large images with small diffusion models is gaining popularity, as the cost of training large models can be prohibitive. A common approach jointly generates a series of overlapping image patches and obtains the large image by merging adjacent patches. However, results from existing methods often exhibit obvious artifacts, e.g., seams and inconsistent objects and styles. To address these issues, we propose Guided Fusion (GF), which mitigates the negative influence of distant image regions by applying a weighted average to the overlapping regions. Moreover, we propose Variance-Corrected Fusion (VCF), which corrects the data variance after averaging, yielding more accurate fusion for the Denoising Diffusion Probabilistic Model. Furthermore, we propose one-shot Style Alignment (SA), which produces a coherent style across the large image by adjusting the initial input noise, without adding extra computational burden. Extensive experiments demonstrate that the proposed fusion methods significantly improve the quality of the generated images. As a plug-and-play module, the proposed method can be widely applied to enhance other fusion-based large-image generation methods.
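The two fusion ideas above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: a linear ramp stands in for GF's weighted average over an overlapping strip, and the rescaling step mirrors the intuition behind VCF, namely that averaging independent unit-variance noise shrinks its variance to w^2 + (1-w)^2 < 1, which must be corrected for the DDPM scheduler's noise assumptions to hold.

```python
import numpy as np

rng = np.random.default_rng(0)
overlap = 16                             # width of the overlapping strip (hypothetical)
left = rng.standard_normal((64, 64))     # latent noise of patch A
right = rng.standard_normal((64, 64))    # latent noise of patch B

# Guided-Fusion-style weighting: each patch dominates near its own center,
# so distant regions contribute little to the fused strip.
w = np.linspace(1.0, 0.0, overlap)
fused = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]

# Variance-correction step: the weighted average of independent N(0, 1)
# noise has variance w^2 + (1-w)^2; divide by its std to restore unit variance.
corrected = fused / np.sqrt(w**2 + (1 - w) ** 2)
```

Without the correction, the fused strip is under-dispersed relative to the noise level the denoiser expects at that timestep, which is one plausible source of seam artifacts.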