Diffusion models have demonstrated promising performance in real-world video super-resolution (VSR). However, the dozens of sampling steps they require make inference extremely slow. Sampling acceleration techniques, particularly single-step sampling, offer a potential solution. Nonetheless, achieving one-step sampling in VSR remains challenging due to the high training overhead on video data and stringent fidelity demands. To tackle these issues, we propose DOVE, an efficient one-step diffusion model for real-world VSR. DOVE is obtained by fine-tuning a pretrained video diffusion model (i.e., CogVideoX). To train DOVE effectively, we introduce a latent-pixel training strategy, which employs a two-stage scheme to gradually adapt the model to the video super-resolution task. Meanwhile, we design a video processing pipeline to construct a high-quality dataset tailored for VSR, termed HQ-VSR. Fine-tuning on this dataset further enhances the restoration capability of DOVE. Extensive experiments show that DOVE achieves performance comparable or superior to multi-step diffusion-based VSR methods. It also offers outstanding inference efficiency, achieving up to a 28$\times$ speed-up over existing methods such as MGLD-VSR. Code is available at: https://github.com/zhengchen1999/DOVE.