Image diffusion models have been adapted for real-world video super-resolution to tackle the over-smoothing problem of GAN-based methods. However, because they are trained on static images, these models struggle to maintain temporal consistency and cannot capture temporal dynamics effectively. While integrating text-to-video (T2V) models into video super-resolution for improved temporal modeling is a natural next step, two key challenges remain: artifacts introduced by the complex degradations of real-world scenarios, and compromised fidelity caused by the strong generative capacity of powerful T2V models (\textit{e.g.}, CogVideoX-5B). To enhance the spatio-temporal quality of restored videos, we introduce~\textbf{\name} (\textbf{S}patial-\textbf{T}emporal \textbf{A}ugmentation with T2V models for \textbf{R}eal-world video super-resolution), a novel approach that leverages T2V models for real-world video super-resolution, achieving realistic spatial details and robust temporal consistency. Specifically, we introduce a Local Information Enhancement Module (LIEM) before the global attention block to enrich local details and mitigate degradation artifacts. Moreover, we propose a Dynamic Frequency (DF) Loss to reinforce fidelity, guiding the model to focus on different frequency components across diffusion steps. Extensive experiments demonstrate that~\textbf{\name}~outperforms state-of-the-art methods on both synthetic and real-world datasets.
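To make the intuition behind the Dynamic Frequency loss concrete, a minimal sketch of such a timestep-weighted frequency decomposition is given below; here $x_0$ denotes the ground-truth frame, $\hat{x}_0$ the model's prediction at diffusion step $t$, $\mathcal{F}_{\mathrm{low}}$ and $\mathcal{F}_{\mathrm{high}}$ are illustrative low- and high-frequency filtering operators, and $\alpha(t)$ is an assumed timestep-dependent weight. The notation is ours for illustration and should not be read as the exact formulation:
\begin{equation*}
\mathcal{L}_{\mathrm{DF}} \;=\; \alpha(t)\,\bigl\lVert \mathcal{F}_{\mathrm{low}}(\hat{x}_0) - \mathcal{F}_{\mathrm{low}}(x_0) \bigr\rVert_2^2 \;+\; \bigl(1-\alpha(t)\bigr)\,\bigl\lVert \mathcal{F}_{\mathrm{high}}(\hat{x}_0) - \mathcal{F}_{\mathrm{high}}(x_0) \bigr\rVert_2^2,
\end{equation*}
with $\alpha(t)$ shifting the emphasis from low-frequency structure in early denoising steps toward high-frequency detail in later ones, consistent with the coarse-to-fine behavior of diffusion sampling.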