Recent text-to-video (T2V) diffusion models have demonstrated impressive generation capabilities across various domains. However, these models often generate videos that are misaligned with their text prompts, especially when the prompts describe complex scenes with multiple objects and attributes. To address this, we introduce VideoRepair, a novel model-agnostic, training-free video refinement framework that automatically identifies fine-grained text-video misalignments and generates explicit spatial and textual feedback, enabling a T2V diffusion model to perform targeted, localized refinements. VideoRepair consists of four stages. In (1) video evaluation, we detect misalignments by generating fine-grained evaluation questions and answering them with a multimodal large language model (MLLM). In (2) refinement planning, we identify the accurately generated objects and create localized prompts for refining the remaining areas of the video. In (3) region decomposition, we segment the correctly generated regions using a combined grounding module. Finally, in (4) localized refinement, we regenerate the video, adjusting the misaligned regions while preserving the correct ones. On two popular video generation benchmarks (EvalCrafter and T2V-CompBench), VideoRepair substantially outperforms recent baselines across various text-video alignment metrics. We provide a comprehensive analysis of VideoRepair's components along with qualitative examples.
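To make the four-stage loop concrete, below is a minimal Python sketch of how the stages could compose. All interfaces here (T2VModel, MLLM, Grounder) and every method name are hypothetical stand-ins for the components the abstract describes, not the paper's actual implementation or any real library API.

```python
# Hypothetical sketch of the four-stage VideoRepair loop.
# T2VModel, MLLM, and Grounder are placeholder interfaces for the
# text-to-video model, the MLLM evaluator/planner, and the grounding
# module; concrete implementations are assumed to be supplied by the user.
from typing import Any, List, Protocol, Tuple

Video = Any  # placeholder for a generated video (e.g., a frame tensor)
Mask = Any   # placeholder for per-frame segmentation masks


class T2VModel(Protocol):
    def generate(self, prompt: str) -> Video: ...
    def inpaint(self, video: Video, masks: List[Mask],
                prompts: List[str]) -> Video: ...


class MLLM(Protocol):
    def eval_questions(self, prompt: str) -> List[str]: ...
    def is_aligned(self, video: Video, question: str) -> bool: ...
    def plan(self, prompt: str,
             failed: List[str]) -> Tuple[List[str], List[str]]: ...


class Grounder(Protocol):
    def segment(self, video: Video, objects: List[str]) -> List[Mask]: ...


def video_repair(prompt: str, t2v: T2VModel, mllm: MLLM,
                 grounder: Grounder, max_rounds: int = 1) -> Video:
    """Refine a generated video until it aligns with the text prompt."""
    video = t2v.generate(prompt)
    for _ in range(max_rounds):
        # (1) Video evaluation: generate fine-grained questions from the
        # prompt and answer them with the MLLM to detect misalignments.
        questions = mllm.eval_questions(prompt)
        failed = [q for q in questions if not mllm.is_aligned(video, q)]
        if not failed:
            break  # all checks pass; nothing left to repair

        # (2) Refinement planning: decide which objects were generated
        # correctly and write localized prompts for everything else.
        keep_objects, local_prompts = mllm.plan(prompt, failed)

        # (3) Region decomposition: segment the correctly generated
        # regions so they can be preserved during regeneration.
        masks = grounder.segment(video, keep_objects)

        # (4) Localized refinement: regenerate only outside the preserved
        # masks, conditioned on the localized prompts.
        video = t2v.inpaint(video, masks=masks, prompts=local_prompts)
    return video
```

The crux of the design is step (4): rather than regenerating the whole clip from scratch, only the area outside the preserved masks is resampled, which is what keeps correctly generated content intact across refinement rounds.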