In this paper, we focus on the problem of image outpainting, which aims to extrapolate the surrounding regions of an image given its center contents. Although recent works have achieved promising performance, their lack of versatility and customization hinders practical application in broader scenarios. Therefore, this work presents a novel image outpainting framework that can customize its results according to users' requirements. First, we leverage a Multimodal Large Language Model (MLLM) to automatically extract and organize the textual descriptions of the masked and unmasked parts of a given image. The obtained text prompts are then introduced to endow our model with the capacity to customize the outpainting results. In addition, a special cross-attention module, namely Center-Total-Surrounding (CTS), is elaborately designed to further enhance the interaction between specific spatial regions of the image and the corresponding parts of the text prompts. Note that, unlike most existing methods, our approach is highly resource-efficient: it is only lightly fine-tuned from an off-the-shelf Stable Diffusion (SD) model rather than being trained from scratch. Finally, experimental results on three commonly used datasets, i.e., Scenery, Building, and WikiArt, demonstrate that our model significantly surpasses state-of-the-art (SoTA) methods. Moreover, versatile outpainting results are presented to show its customization ability.
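To make the region-wise text guidance concrete, below is a minimal PyTorch sketch of a Center-Total-Surrounding-style cross-attention block. This is an illustration under stated assumptions, not the paper's implementation: the module names (`RegionCrossAttention`, `CTSCrossAttention`), the token and mask shapes, and the additive fusion of the three attention branches are all hypothetical.

```python
# A minimal sketch (NOT the authors' code) of the CTS idea: center, surrounding,
# and total image tokens each attend to their corresponding text prompt, and the
# results are merged back via a binary center mask. All names/shapes are assumed.
import torch
import torch.nn as nn


class RegionCrossAttention(nn.Module):
    """Standard cross-attention: image tokens (queries) attend to text tokens."""

    def __init__(self, dim: int, text_dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(text_dim, dim, bias=False)
        self.to_v = nn.Linear(text_dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) image tokens; text: (B, L, text_dim) prompt embeddings.
        B, N, C = x.shape
        h = self.num_heads
        q = self.to_q(x).view(B, N, h, C // h).transpose(1, 2)       # (B, h, N, C/h)
        k = self.to_k(text).view(B, -1, h, C // h).transpose(1, 2)   # (B, h, L, C/h)
        v = self.to_v(text).view(B, -1, h, C // h).transpose(1, 2)   # (B, h, L, C/h)
        attn = (q @ k.transpose(-2, -1)) * self.scale                # (B, h, N, L)
        out = attn.softmax(dim=-1) @ v                               # (B, h, N, C/h)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


class CTSCrossAttention(nn.Module):
    """Hypothetical CTS block: three region-specific attention branches whose
    outputs are routed by the center mask (1 = center pixel token)."""

    def __init__(self, dim: int, text_dim: int):
        super().__init__()
        self.center_attn = RegionCrossAttention(dim, text_dim)
        self.surround_attn = RegionCrossAttention(dim, text_dim)
        self.total_attn = RegionCrossAttention(dim, text_dim)

    def forward(self, x, center_mask, t_center, t_surround, t_total):
        # x: (B, N, C); center_mask: (B, N, 1); t_*: (B, L, text_dim).
        out = self.total_attn(x, t_total)  # global prompt guides all tokens
        out = out + center_mask * self.center_attn(x, t_center)
        out = out + (1 - center_mask) * self.surround_attn(x, t_surround)
        return out
```

Under this reading, the "total" branch plays the role of ordinary SD text conditioning, while the center and surrounding branches restrict each region's queries to the prompt describing that region; only these added branches would need fine-tuning on top of the frozen SD backbone.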