Diffusion models are widely leveraged for both video generation and video editing. As each field has its own task-specific problems, it is difficult to develop a single diffusion model that completes both tasks simultaneously. A video diffusion model relying solely on the text prompt can be adapted to unify the two tasks; however, it lacks the capability to align the heterogeneous text and image modalities, leading to various misalignment problems. In this work, we are the first to propose a unified Multi-alignment Diffusion, dubbed MagDiff, for both high-fidelity video generation and editing. The proposed MagDiff introduces three types of alignment: subject-driven alignment, adaptive prompts alignment, and high-fidelity alignment. In particular, subject-driven alignment trades off the image and text prompts, serving as a unified foundation generative model for both tasks. Adaptive prompts alignment assigns different weights to the image and text prompts, emphasizing the respective strengths of homogeneous and heterogeneous alignment. High-fidelity alignment takes the subject image as an additional model input to further enhance the fidelity of both video generation and editing. Experimental results on four benchmarks show that our method outperforms prior methods on each task.
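To make the adaptive prompts alignment idea concrete, below is a minimal PyTorch sketch of how image and text prompt embeddings could be fused with per-sample weights before cross-attention. The module name `AdaptivePromptFusion`, the gating mechanism, and all shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdaptivePromptFusion(nn.Module):
    """Hypothetical sketch of adaptive prompts alignment: weight the text
    and image prompt embeddings per sample before they condition the
    cross-attention layers of a video diffusion UNet. Names and shapes
    are assumptions, not MagDiff's released code."""

    def __init__(self, dim: int):
        super().__init__()
        # Predict one scalar weight per modality from the pooled
        # embeddings; softmax keeps the two weights on a simplex.
        self.gate = nn.Linear(2 * dim, 2)

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        # text_emb:  (B, L_t, D) text-prompt tokens
        # image_emb: (B, L_i, D) subject-image tokens
        pooled = torch.cat([text_emb.mean(dim=1), image_emb.mean(dim=1)], dim=-1)
        w_text, w_image = self.gate(pooled).softmax(dim=-1).chunk(2, dim=-1)
        # Scale each modality by its weight, then concatenate along the
        # token axis so cross-attention can attend to both prompts.
        fused = torch.cat([w_text.unsqueeze(1) * text_emb,
                           w_image.unsqueeze(1) * image_emb], dim=1)
        return fused  # (B, L_t + L_i, D)
```

Under this reading, assigning a larger weight to the image tokens emphasizes homogeneous (image-to-video) alignment, while a larger text weight emphasizes heterogeneous (text-to-video) alignment.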