Building on the success of diffusion models in image generation and editing, video editing has recently gained substantial attention. However, maintaining temporal consistency and motion alignment remains challenging. To address these issues, this paper proposes DINO-guided Video Editing (DIVE), a framework designed to facilitate subject-driven editing of source videos conditioned on either target text prompts or reference images with specific identities. The core of DIVE lies in leveraging the powerful semantic features extracted from a pretrained DINOv2 model as implicit correspondences to guide the editing process. Specifically, to ensure temporal motion consistency, DIVE employs DINO features to align with the motion trajectory of the source video. For precise subject editing, DIVE incorporates the DINO features of reference images into a pretrained text-to-image model to learn Low-Rank Adaptations (LoRAs), effectively registering the target subject's identity. Extensive experiments on diverse real-world videos demonstrate that our framework achieves high-quality editing results with robust motion consistency, highlighting the potential of DINO features for video editing. Project page: https://dino-video-editing.github.io
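The notion of "implicit correspondences" from semantic features can be illustrated as nearest-neighbor matching in feature space: patches from two frames are paired by cosine similarity of their features. The sketch below is not the authors' code; the shapes and synthetic features are hypothetical stand-ins for DINOv2 patch embeddings, intended only to show the matching mechanism.

```python
import numpy as np

def match_patches(feats_src, feats_tgt):
    """For each source patch feature, find the most similar target patch.

    feats_src: (N, D) array of source-frame patch features.
    feats_tgt: (M, D) array of target-frame patch features.
    Returns an (N,) array of indices into feats_tgt.
    """
    # L2-normalize so the dot product equals cosine similarity.
    src = feats_src / np.linalg.norm(feats_src, axis=1, keepdims=True)
    tgt = feats_tgt / np.linalg.norm(feats_tgt, axis=1, keepdims=True)
    sim = src @ tgt.T          # (N, M) cosine-similarity matrix
    return sim.argmax(axis=1)  # nearest target patch per source patch

# Synthetic check: a shuffled copy of the target features should be
# matched back to its original (pre-shuffle) positions.
rng = np.random.default_rng(0)
feats_tgt = rng.standard_normal((16, 8))
perm = rng.permutation(16)
feats_src = feats_tgt[perm]
idx = match_patches(feats_src, feats_tgt)
print((idx == perm).all())  # True: each patch recovers its counterpart
```

In DIVE this kind of feature-space correspondence is used implicitly to keep edited content aligned with the source video's motion trajectory, rather than relying on explicit optical flow.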