Diffusion-based generative models have recently shown remarkable image and video editing capabilities. However, local video editing, particularly the removal of small attributes such as glasses, remains a challenge. Existing methods either alter the video excessively, generate unrealistic artifacts, or fail to perform the requested edit consistently throughout the video. In this work, we focus on the consistent, identity-preserving removal of glasses in videos, using it as a case study for consistent local attribute removal. Due to the lack of paired data, we adopt a weakly supervised approach and generate synthetic, imperfect data using an adjusted pretrained diffusion model. We show that despite the data imperfections, by learning from our generated data and leveraging the prior of pretrained diffusion models, our model performs the desired edit consistently while preserving the original video content. Furthermore, we demonstrate that our method generalizes to other local video editing tasks by applying it successfully to facial sticker removal. Our approach improves significantly over existing methods, showcasing the potential of leveraging synthetic data and strong video priors for local video editing.