Recent advances in text-to-video (T2V) generation have leveraged diffusion models to enhance the visual coherence of videos synthesized from textual descriptions. However, existing research focuses primarily on object motion and often overlooks cinematic language, which is crucial for conveying emotion and narrative pacing in cinematography. To address this gap, we propose a threefold approach to improving cinematic control in T2V models. First, we introduce a meticulously annotated cinematic-language dataset spanning twenty subcategories of shot framing, shot angles, and camera movements, enabling models to learn diverse cinematic styles. Second, we present CameraDiff, which employs LoRA to achieve precise, stable cinematic control and flexible shot generation. Third, we propose CameraCLIP, a model designed to evaluate cinematic alignment and guide multi-shot composition. Building on CameraCLIP, we introduce CLIPLoRA, a CLIP-guided dynamic LoRA composition method that adaptively fuses multiple pre-trained cinematic LoRAs, enabling smooth transitions and seamless style blending. Experimental results demonstrate that CameraDiff provides stable and precise cinematic control, CameraCLIP achieves an R@1 score of 0.83, and CLIPLoRA significantly improves multi-shot composition within a single video, bridging the gap between automated video generation and professional cinematography.\textsuperscript{1}
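To make the CLIP-guided dynamic composition idea concrete, the following is a minimal sketch, not the paper's implementation. It assumes precomputed CLIP text embeddings (one per cinematic LoRA's tag, e.g. "pan left" or "low angle", plus one for the target prompt) and per-LoRA weight deltas; fusion coefficients come from a softmax over CLIP similarities. All names (`fuse_loras`, `lora_deltas`, the toy shapes) are illustrative assumptions.

```python
# Hedged sketch of CLIP-guided dynamic LoRA fusion; not the authors' code.
import torch
import torch.nn.functional as F

def fuse_loras(prompt_emb: torch.Tensor,
               lora_tag_embs: torch.Tensor,  # (num_loras, dim) CLIP text embeddings, one per LoRA tag
               lora_deltas: list,            # per-LoRA {param_name: weight-delta tensor}
               temperature: float = 0.07) -> dict:
    """Weight each LoRA by CLIP similarity between its tag and the prompt, then merge deltas."""
    # Cosine similarity of the prompt to each LoRA's tag embedding -> (num_loras,)
    sims = F.cosine_similarity(prompt_emb.unsqueeze(0), lora_tag_embs, dim=-1)
    # Softmax turns similarities into adaptive, non-negative fusion coefficients.
    weights = F.softmax(sims / temperature, dim=0)
    # Weighted sum of the per-parameter deltas across all LoRAs.
    fused = {}
    for name in lora_deltas[0]:
        fused[name] = sum(w * d[name] for w, d in zip(weights, lora_deltas))
    return fused

# Toy usage with random tensors standing in for CLIP outputs and LoRA weights.
dim = 512
prompt_emb = torch.randn(dim)
tag_embs = torch.randn(3, dim)    # three cinematic LoRAs
deltas = [{"attn.to_q": torch.randn(8, 8)} for _ in range(3)]
merged = fuse_loras(prompt_emb, tag_embs, deltas)
print(merged["attn.to_q"].shape)  # torch.Size([8, 8])
```

In this sketch a low temperature sharpens fusion toward the single best-matching LoRA, while a higher one blends several styles more evenly, consistent with the abstract's stated goals of smooth transitions and style blending.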