Benefiting from large-scale pre-training on text-video pairs, current text-to-video (T2V) diffusion models can generate high-quality videos from a text description. Moreover, given a few reference images or videos, parameter-efficient fine-tuning methods such as LoRA can learn high-quality customized concepts, e.g., a specific subject or the motion from a reference video. However, combining multiple concepts trained from different references into a single network produces obvious artifacts. To this end, we propose CustomTTT, which jointly customizes the appearance and the motion of a generated video with ease. In detail, we first analyze how the prompt influences the current video diffusion model and find that LoRAs are needed only in specific layers for appearance and motion customization. Moreover, since each LoRA is trained individually, we propose a novel test-time training technique that updates the parameters after combination, utilizing the trained customized models. We conduct detailed experiments to verify the effectiveness of the proposed method, which outperforms several state-of-the-art works in both qualitative and quantitative evaluations.
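The core mechanics mentioned above (a frozen base layer augmented with trainable low-rank adapters, and the naive summation of independently trained adapters that CustomTTT's test-time training then refines) can be illustrated with a minimal PyTorch sketch. All names here (`LoRALinear`, the rank, the toy dimensions) are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Two LoRAs trained separately (e.g. one for appearance, one for motion)
# attached to the same frozen layer of a diffusion model.
base = nn.Linear(16, 16)
appearance = LoRALinear(base, rank=4)
motion = LoRALinear(base, rank=4)

def combined(x):
    # Naive combination: sum both low-rank residuals onto the frozen base.
    # It is this direct summation that causes artifacts and motivates a
    # further test-time parameter update in the paper's setting.
    return (base(x)
            + appearance.scale * appearance.up(appearance.down(x))
            + motion.scale * motion.up(motion.down(x)))

x = torch.randn(2, 16)
y = combined(x)
print(y.shape)
```

Because each adapter's `up` projection is zero-initialized, the combined output here equals the frozen base output; after each LoRA is fine-tuned on its own reference, the two residuals interfere, which is the artifact problem the abstract describes.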