Exposure correction aims to enhance visual data degraded by improper exposure, which can greatly improve perceptual quality. However, previous methods focus mainly on the image modality, while the video counterpart remains underexplored in the literature. Directly applying prior image-based methods to videos results in temporal incoherence and low visual quality. Through a thorough investigation, we find that progress in this area has been limited by the absence of a benchmark dataset. Therefore, in this paper, we construct the first real-world paired video dataset, covering both underexposed and overexposed dynamic scenes. To achieve spatial alignment, we use two DSLR cameras and a beam splitter to simultaneously capture improperly and normally exposed videos. Furthermore, we propose an end-to-end video exposure correction network in which a dual-stream module handles both underexposure and overexposure factors, enhancing illumination based on Retinex theory. Extensive experiments with various metrics and user studies demonstrate the significance of our dataset and the effectiveness of our method. The code and dataset are available at https://github.com/kravrolens/VECNet.
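To make the Retinex-based idea concrete: Retinex theory models an observed image as the pixel-wise product of reflectance and illumination, I = R ⊙ L, so exposure can be corrected by adjusting L alone while preserving R. The sketch below is a minimal illustration of that decompose-adjust-recompose principle, not the paper's VECNet; the max-channel illumination prior and the gamma curve are common simplifications chosen here for clarity.

```python
import numpy as np

def retinex_correct(img, gamma=0.6, eps=1e-6):
    """Toy Retinex-style exposure correction on an HxWx3 float image in [0, 1].

    Assumes I = R * L; estimates L with a simple max-channel prior,
    adjusts it with a gamma curve, and recomposes the image.
    """
    # Illumination estimate: per-pixel maximum over color channels (a common prior).
    L = img.max(axis=-1, keepdims=True)
    # Reflectance recovered by dividing out the illumination (eps avoids /0).
    R = img / (L + eps)
    # gamma < 1 brightens underexposed frames; gamma > 1 darkens overexposed ones.
    L_adj = np.power(L, gamma)
    return np.clip(R * L_adj, 0.0, 1.0)

# Example: brighten a simulated underexposed frame.
frame = np.random.rand(4, 4, 3) * 0.2   # dim values in [0, 0.2]
out = retinex_correct(frame, gamma=0.5)
```

Because reflectance is untouched, scene colors and textures are preserved while only the brightness map changes; a learned model such as the proposed dual-stream network replaces the fixed prior and gamma curve with predicted adjustments.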