Despite substantial progress in text-to-video generation, precise and flexible control over fine-grained spatiotemporal attributes remains a significant open challenge. To address this limitation, we introduce VCtrl (also termed PP-VCtrl), a novel framework designed to enable fine-grained control over pre-trained video diffusion models in a unified manner. VCtrl integrates diverse user-specified control signals, such as Canny edges, segmentation masks, and human keypoints, into pre-trained video diffusion models through a generalizable conditional module that uniformly encodes multiple types of auxiliary signals without modifying the underlying generator. Additionally, we design a unified control-signal encoding pipeline and a sparse residual connection mechanism to efficiently incorporate control representations. Comprehensive experiments and human evaluations demonstrate that VCtrl effectively enhances controllability and generation quality. The source code and pre-trained models are implemented using the PaddlePaddle framework and publicly available at http://github.com/PaddlePaddle/PaddleMIX/tree/develop/ppdiffusers/examples/ppvctrl.
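To make the two mechanisms named above more concrete, the sketch below illustrates, at a high level, (1) a shared encoder that maps a pixel-space control video (Canny edges, segmentation masks, or keypoint renderings) into the generator's latent space, and (2) sparse residual connections that add the resulting features to only a subset of the frozen generator's blocks. This is a minimal, hypothetical PaddlePaddle sketch: every class name, channel count, and injection index here is an illustrative assumption, not the actual VCtrl implementation.

```python
# Hypothetical sketch of a unified control encoder plus sparse residual
# injection into a frozen backbone. Names and hyperparameters are illustrative.
import paddle
import paddle.nn as nn


def _zero_attr():
    # Zero initialization so that, at the start of training, the injected
    # residuals are zero and the pre-trained generator is left unchanged.
    return paddle.ParamAttr(initializer=nn.initializer.Constant(0.0))


class ControlEncoder(nn.Layer):
    """Encodes a per-frame control video (B, C, T, H, W) into latent features."""

    def __init__(self, in_channels=3, hidden=16, latent=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3D(in_channels, hidden, kernel_size=3, padding=1),
            nn.Silu(),
            # Downsample spatially only (stride on H and W, not T).
            nn.Conv3D(hidden, hidden, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.Silu(),
            nn.Conv3D(hidden, latent, kernel_size=3, stride=(1, 2, 2), padding=1),
        )

    def forward(self, control):
        return self.net(control)


class SparseResidualInjector(nn.Layer):
    """Adds control features into a frozen backbone at selected block indices."""

    def __init__(self, backbone_blocks, latent=32, inject_at=(1, 4)):
        super().__init__()
        self.blocks = backbone_blocks
        # The generator itself stays frozen; only the control path trains.
        for p in self.blocks.parameters():
            p.stop_gradient = True
        self.inject_at = set(inject_at)
        # One zero-initialized 1x1x1 projection per sparse connection point.
        self.proj = nn.LayerDict({
            str(i): nn.Conv3D(latent, latent, kernel_size=1,
                              weight_attr=_zero_attr(), bias_attr=_zero_attr())
            for i in inject_at
        })

    def forward(self, x, control_feat):
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i in self.inject_at:
                x = x + self.proj[str(i)](control_feat)
        return x


if __name__ == "__main__":
    # Toy demo: identity-sized conv blocks stand in for the frozen generator.
    latent = 32
    blocks = nn.LayerList(
        [nn.Conv3D(latent, latent, 3, padding=1) for _ in range(6)]
    )
    encoder = ControlEncoder(in_channels=3, hidden=16, latent=latent)
    injector = SparseResidualInjector(blocks, latent=latent, inject_at=(1, 4))

    control = paddle.rand([1, 3, 4, 32, 32])  # e.g. a 4-frame edge-map video
    x = paddle.rand([1, latent, 4, 8, 8])     # generator latents
    out = injector(x, encoder(control))
    print(out.shape)  # [1, 32, 4, 8, 8]
```

Under this reading, zero-initialized projections and a small set of injection points let the control branch start as a no-op and learn only a sparse correction, which is one plausible way to condition a pre-trained generator without modifying its weights.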