Video relighting offers immense creative potential and commercial value, but progress is hindered by several challenges: the absence of an adequate evaluation metric, severe light flickering, and the degradation of fine-grained details during editing. To overcome these challenges, we introduce Hi-Light, a novel, training-free framework for high-fidelity, high-resolution, robust video relighting. Our approach introduces three technical innovations: a Lightness-Prior-Anchored Guided Relighting Diffusion scheme that stabilises the intermediate relit video, a Hybrid Motion-Adaptive Lighting Smoothing Filter that leverages optical flow to ensure temporal stability without introducing motion blur, and a LAB-based Detail Fusion module that preserves high-frequency detail from the original video. Furthermore, to address the critical gap in evaluation, we propose the Light Stability Score, the first quantitative metric specifically designed to measure lighting consistency. Extensive experiments demonstrate that Hi-Light significantly outperforms state-of-the-art methods in both qualitative and quantitative comparisons, producing stable, highly detailed relit videos.
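The abstract does not specify the exact form of the Motion-Adaptive Lighting Smoothing Filter. A minimal sketch of one plausible motion-adaptive temporal smoother follows, assuming per-pixel optical-flow magnitudes are already available; the function name, the exponential weighting, and the `tau` parameter are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def motion_adaptive_smooth(prev_L, curr_L, flow_mag, tau=2.0):
    """Temporal blend on the lightness channel whose strength falls off
    with per-pixel motion: near-static regions are smoothed hard
    (suppressing light flicker), while fast-moving regions keep the
    current frame (avoiding motion blur).

    prev_L, curr_L : lightness channels of consecutive frames
    flow_mag       : per-pixel optical-flow magnitude (illustrative input)
    """
    # alpha -> 1 where motion is large: trust the current frame there.
    # The 0.2 floor (an assumed hyperparameter) keeps some responsiveness
    # even in fully static regions.
    alpha = np.clip(1.0 - np.exp(-flow_mag / tau), 0.2, 1.0)
    return alpha * curr_L + (1.0 - alpha) * prev_L
```

In a static region (`flow_mag == 0`) the output leans heavily on the previous smoothed frame; where motion is large, the blend collapses to the current frame.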
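Likewise, the LAB-based detail fusion can be sketched on the L channel alone: keep the relit frame's low-frequency illumination and re-inject the original frame's high-frequency detail. This is a hedged illustration of the general idea, assuming both L channels (in [0, 100]) are given and using a Gaussian low-pass split; the specific decomposition and `sigma` value are assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_details(relit_L, original_L, sigma=3.0):
    """Combine the relit frame's low-frequency lightness with the
    original frame's high-frequency detail (illustrative sketch).

    relit_L, original_L : L channels in LAB space, values in [0, 100]
    sigma               : assumed Gaussian cutoff between 'lighting'
                          (low frequency) and 'detail' (high frequency)
    """
    low = gaussian_filter(relit_L, sigma)                    # new illumination
    high = original_L - gaussian_filter(original_L, sigma)   # original detail
    return np.clip(low + high, 0.0, 100.0)
```

The a/b chroma channels would pass through from the relit result unchanged; only lightness detail is restored, which matches the stated goal of preserving fine structure while keeping the new lighting.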