Benefiting from large-scale pre-training on text-video pairs, current text-to-video (T2V) diffusion models can generate high-quality videos from a text description. Moreover, given a few reference images or videos, parameter-efficient fine-tuning methods such as LoRA can produce high-quality customized concepts, e.g., a specific subject or the motion from a reference video. However, naively combining multiple concepts trained from different references into a single network yields obvious artifacts. To this end, we propose CustomTTT, which can easily and jointly customize the appearance and the motion of the generated video. Specifically, we first analyze the influence of the text prompt in current video diffusion models and find that LoRAs are needed only in specific layers for appearance and motion customization. In addition, since each LoRA is trained individually, we propose a novel test-time training technique that updates the parameters after combination using the trained customized models. We conduct extensive experiments to verify the effectiveness of the proposed method, which outperforms several state-of-the-art approaches in both qualitative and quantitative evaluations.