Diffusion models have emerged as a powerful generative technology and have proven applicable in various scenarios. Most existing foundational diffusion models are designed primarily for text-guided visual generation and do not support multi-modal conditions, which are essential for many visual editing tasks. This limitation prevents these foundational diffusion models from serving as a unified model in the field of visual generation, in the way GPT-4 does in natural language processing. In this work, we propose ACE, an All-round Creator and Editor, which achieves performance comparable to that of expert models across a wide range of visual generation tasks. To achieve this goal, we first introduce a unified condition format termed Long-context Condition Unit (LCU) and propose a novel Transformer-based diffusion model that takes LCU as input, enabling joint training across various generation and editing tasks. Furthermore, we propose an efficient data collection approach to address the lack of available training data. It acquires pairwise images through synthesis-based or clustering-based pipelines and supplies these pairs with accurate textual instructions by leveraging a fine-tuned multi-modal large language model. To comprehensively evaluate the performance of our model, we establish a benchmark of manually annotated paired data spanning a variety of visual generation tasks. Extensive experimental results demonstrate the superiority of our model in visual generation. Thanks to the all-in-one capabilities of our model, we can easily build a multi-modal chat system that responds to any interactive request for image creation using a single model as the backend, avoiding the cumbersome pipelines typically employed in visual agents. Code and models will be available on the project page: https://ali-vilab.github.io/ace-page/.