Reconstructing High Dynamic Range (HDR) video from sequences of alternating-exposure Low Dynamic Range (LDR) frames remains highly challenging, especially in dynamic scenes, where cross-exposure inconsistencies and complex motion make inter-frame alignment difficult and lead to ghosting and detail loss. Existing methods often suffer from inaccurate alignment, suboptimal feature aggregation, and degraded reconstruction quality in motion-dominated regions. To address these challenges, we propose $\text{F}^2\text{HDR}$, a two-stage HDR video reconstruction framework that robustly perceives inter-frame motion and restores fine details in complex dynamic scenarios. The framework integrates a flow adapter that adapts generic optical flow for robust cross-exposure alignment, a physical motion modeling module that identifies salient motion regions, and a motion-aware refinement network that aggregates complementary information while removing ghosting and noise. Extensive experiments demonstrate that $\text{F}^2\text{HDR}$ achieves state-of-the-art performance on real-world HDR video benchmarks, producing ghost-free, high-fidelity results under large motion and exposure variations.