Diffusion models have recently advanced video restoration, but applying them to real-world video super-resolution (VSR) remains challenging due to high latency, prohibitive computation, and poor generalization to ultra-high resolutions. Our goal in this work is to make diffusion-based VSR practical by achieving efficiency, scalability, and real-time performance. To this end, we propose FlashVSR, the first diffusion-based one-step streaming framework towards real-time VSR. FlashVSR runs at approximately 17 FPS for 768×1408 videos on a single A100 GPU by combining three complementary innovations: (i) a train-friendly three-stage distillation pipeline that enables streaming super-resolution, (ii) locality-constrained sparse attention that cuts redundant computation while bridging the train-test resolution gap, and (iii) a tiny conditional decoder that accelerates reconstruction without sacrificing quality. To support large-scale training, we also construct VSR-120K, a new dataset with 120k videos and 180k images. Extensive experiments show that FlashVSR scales reliably to ultra-high resolutions and achieves state-of-the-art performance with up to 12× speedup over prior one-step diffusion VSR models. We will release the code, pretrained models, and dataset to foster future research in efficient diffusion-based VSR.
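As a rough illustration of the locality-constrained sparse attention idea, the sketch below restricts each query token to keys within a fixed spatial window, so the attention span stays the same regardless of frame resolution. All names here (`local_window_attention`, `window`) are hypothetical, and the dense masked form is only a readable stand-in for an optimized sparse kernel; it is not the authors' implementation.

```python
# Minimal sketch of locality-constrained attention (assumed formulation):
# each query attends only to keys within a fixed Chebyshev window on the
# frame grid, bounding per-token cost as resolution grows.
import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, coords, window):
    """q, k, v: (N, d) token features; coords: (N, 2) integer (y, x) grid
    positions; window: max spatial distance a query may attend over."""
    # dist[i, j] = Chebyshev distance between tokens i and j on the grid.
    dist = (coords[:, None, :] - coords[None, :, :]).abs().max(dim=-1).values
    # Scaled dot-product scores, with everything outside the window masked out.
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(dist > window, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: an 8x8 grid of 16-dim tokens with a window of 2.
H = W = 8
coords = torch.stack(
    torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij"), dim=-1
).reshape(-1, 2)
q = k = v = torch.randn(H * W, 16)
out = local_window_attention(q, k, v, coords, window=2)
print(out.shape)  # torch.Size([64, 16])
```

Because the window is fixed, the attention pattern a model sees at training resolution transfers unchanged to larger test frames, which is the plausible mechanism behind the train-test resolution-gap claim in the abstract.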