The field of text-to-image (T2I) generation has made significant progress in recent years, largely driven by advancements in diffusion models. Linguistic control enables effective content creation but struggles with fine-grained control over image generation. This challenge has largely been explored by incorporating additional user-supplied spatial conditions, such as depth maps and edge maps, into pre-trained T2I models through extra encoding. However, multi-control image synthesis still faces several challenges. Specifically, current approaches are limited in handling free combinations of diverse input control signals, overlook the complex relationships among multiple spatial conditions, and often fail to maintain semantic alignment with provided textual prompts, leading to suboptimal user experiences. To address these challenges, we propose AnyControl, a multi-control image synthesis framework that supports arbitrary combinations of diverse control signals. AnyControl develops a novel Multi-Control Encoder that extracts a unified multi-modal embedding to guide the generation process. This approach enables a holistic understanding of user inputs and produces high-quality, faithful results under versatile control signals, as demonstrated by extensive quantitative and qualitative evaluations. Our project page is available at \url{https://any-control.github.io}.