The field of text-to-image (T2I) generation has made significant progress in recent years, largely driven by advances in diffusion models. Linguistic control enables effective content creation but struggles to provide fine-grained control over image generation. Prior work has largely addressed this challenge by incorporating additional user-supplied spatial conditions, such as depth maps and edge maps, into pre-trained T2I models through auxiliary encoders. However, multi-control image synthesis still faces several challenges. Specifically, current approaches are limited in handling free combinations of diverse input control signals, overlook the complex relationships among multiple spatial conditions, and often fail to maintain semantic alignment with the provided textual prompts, leading to suboptimal user experiences. To address these challenges, we propose AnyControl, a multi-control image synthesis framework that supports arbitrary combinations of diverse control signals. AnyControl introduces a novel Multi-Control Encoder that extracts a unified multi-modal embedding to guide the generation process. This approach enables a holistic understanding of user inputs and produces high-quality, faithful results under versatile control signals, as demonstrated by extensive quantitative and qualitative evaluations. Our project page is available at https://any-control.github.io.
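To make the core idea concrete, the sketch below shows one plausible shape such a Multi-Control Encoder could take: a set of learnable query tokens attends jointly over a variable-length list of spatial-condition features and the text embedding, producing a fixed-size unified embedding. This is a minimal illustration under assumed names and dimensions (`MultiControlEncoder`, `dim=320`, `num_queries=256`), not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class MultiControlEncoder(nn.Module):
    """Hypothetical sketch of a multi-control fusion module.

    Learnable query tokens attend over any number of spatial-condition
    token sequences plus the text embedding, yielding one unified
    multi-modal embedding. Shapes and layer choices are illustrative.
    """

    def __init__(self, dim=320, num_queries=256, num_layers=4, num_heads=8):
        super().__init__()
        # Query tokens that aggregate all conditions into a fixed-size output.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads,
            dim_feedforward=4 * dim, batch_first=True)
        self.fuser = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, control_feats, text_emb):
        # control_feats: list of (B, N_i, dim) token sequences, one per
        #   spatial condition (e.g., depth, edges) -- any number, any order.
        # text_emb: (B, N_t, dim) prompt embedding for semantic alignment.
        memory = torch.cat(control_feats + [text_emb], dim=1)
        q = self.queries.unsqueeze(0).expand(memory.shape[0], -1, -1)
        # Queries cross-attend jointly over all conditions and the prompt.
        return self.fuser(q, memory)  # (B, num_queries, dim)


if __name__ == "__main__":
    enc = MultiControlEncoder()
    depth = torch.randn(2, 64, 320)  # tokens from a depth map encoder
    edges = torch.randn(2, 64, 320)  # tokens from an edge map encoder
    text = torch.randn(2, 77, 320)   # CLIP-style prompt tokens
    print(enc([depth, edges], text).shape)  # torch.Size([2, 256, 320])
```

Because the queries attend over a concatenated memory, the module accepts any combination and count of control signals at inference time; the unified output could then condition a diffusion U-Net, e.g. via cross-attention.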