Diffusion models (DMs) have demonstrated exceptional success in video super-resolution (VSR), showcasing a powerful capacity for generating fine-grained details. However, their potential for space-time video super-resolution (STVSR), which requires not only recovering realistic high-resolution content from low-resolution input but also increasing the frame rate with coherent temporal dynamics, remains largely underexplored. Moreover, existing STVSR methods predominantly address spatiotemporal upsampling under simplified degradation assumptions and often struggle in real-world scenarios with complex, unknown degradations. These joint demands on reconstruction fidelity and temporal consistency make developing a robust STVSR framework particularly non-trivial. To address these challenges, we propose OSDEnhancer, a novel framework that, to the best of our knowledge, is the first to achieve real-world STVSR through an efficient one-step diffusion process. OSDEnhancer initializes essential spatiotemporal structures through a linear pre-interpolation strategy and pivots on training a temporal refinement and spatial enhancement mixture of experts (TR-SE MoE), in which distinct expert pathways progressively learn robust, specialized representations for temporal coherence and spatial detail and collaboratively reinforce each other during inference. A bidirectional deformable variational autoencoder (VAE) decoder is further introduced to perform recurrent spatiotemporal aggregation and propagation, enhancing cross-frame reconstruction fidelity. Experiments demonstrate that the proposed method achieves state-of-the-art performance while maintaining superior generalization capability in real-world scenarios.
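The linear pre-interpolation step can be illustrated with a minimal NumPy sketch. The abstract does not specify the exact operator, so the choices below (doubling the frame rate by averaging adjacent frames, then bilinear spatial upsampling) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilinear_up(img, s):
    """Bilinear upsampling of a single (H, W, C) frame by integer factor s."""
    H, W, C = img.shape
    # Sample positions of the output grid, expressed in source coordinates.
    ys = (np.arange(H * s) + 0.5) / s - 0.5
    xs = (np.arange(W * s) + 0.5) / s - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def pre_interpolate(lr_frames, scale=4):
    """Hypothetical linear pre-interpolation: double the frame rate by
    linearly blending adjacent frames, then upsample each frame bilinearly.
    lr_frames: (T, H, W, C) array; returns (2T-1, H*scale, W*scale, C)."""
    T = lr_frames.shape[0]
    mids = 0.5 * (lr_frames[:-1] + lr_frames[1:])  # temporal midpoints
    frames = np.empty((2 * T - 1,) + lr_frames.shape[1:], dtype=lr_frames.dtype)
    frames[0::2] = lr_frames   # keep the original low-res frames
    frames[1::2] = mids        # insert the blended intermediate frames
    return np.stack([bilinear_up(f, scale) for f in frames])
```

Such an initialization gives the one-step diffusion network a coarse spatiotemporal scaffold (correct resolution and frame count) so that the TR-SE MoE only needs to refine details and motion rather than synthesize them from scratch.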