With the ever-increasing demand for high dynamic range (HDR) scene capture, multi-exposure image fusion (MEF) techniques have proliferated. In recent years, detail-enhancement-based multi-scale exposure fusion approaches have led the way in improving highlight and shadow details. Most such methods, however, are too computationally expensive to deploy on mobile devices. This paper presents a perceptual multi-exposure fusion method that not only preserves fine shadow/highlight details but also has lower complexity than detail-enhanced methods. Instead of relying on a detail-enhancement component, we analyze the potential defects of three classical exposure measures and improve two of them, namely adaptive well-exposedness (AWE) and the gradient of color images (3-D gradient). AWE, designed in the YCbCr color space, accounts for the differences between images of varying exposure. The 3-D gradient is employed to extract fine details. We build a large-scale multi-exposure benchmark dataset for static scenes, containing 167 image sequences in total. Experiments on the constructed dataset demonstrate that the proposed method outperforms eight existing state-of-the-art approaches both visually and in terms of MEF-SSIM. Moreover, our approach further improves current image enhancement techniques, preserving fine detail in bright regions.
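To make the ingredients of the abstract concrete, the sketch below shows a generic single-scale exposure-fusion pipeline built from two per-pixel measures resembling those named above: a Mertens-style well-exposedness weight (a Gaussian around mid-intensity) and a simple finite-difference gradient taken over the x, y, and channel axes of a color image. This is an illustration under assumed simplifications, not the paper's AWE or 3-D gradient definitions (which are not specified in the abstract), and real pipelines blend with a multi-scale pyramid rather than a single weighted average.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Mertens-style well-exposedness weight: pixels near mid-intensity
    0.5 score highest; per-channel Gaussian weights are multiplied
    across the color channels. (Illustrative, not the paper's AWE.)"""
    w = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))
    return np.prod(w, axis=-1)

def color_gradient_magnitude(img):
    """Illustrative "3-D" gradient: forward differences along y, x, and
    the channel axis, combined into one magnitude map. A hypothetical
    simplification of a color-image gradient measure."""
    gy = np.diff(img, axis=0, append=img[-1:, :, :])
    gx = np.diff(img, axis=1, append=img[:, -1:, :])
    gc = np.diff(img, axis=2, append=img[:, :, -1:])
    return np.sqrt((gy ** 2 + gx ** 2 + gc ** 2).sum(axis=-1))

def fuse(sequence, eps=1e-12):
    """Single-scale weighted average of an exposure sequence using the
    two measures above as per-pixel weights, normalized so the weights
    over the sequence sum to one at each pixel."""
    weights = np.stack(
        [well_exposedness(im) * (1.0 + color_gradient_magnitude(im))
         for im in sequence]
    )
    weights /= weights.sum(axis=0, keepdims=True) + eps
    return (weights[..., None] * np.stack(sequence)).sum(axis=0)

# Toy "static scene": an under- and an over-exposed copy of a ramp image.
base = np.linspace(0.0, 1.0, 8 * 8 * 3).reshape(8, 8, 3)
under = np.clip(base * 0.4, 0.0, 1.0)
over = np.clip(base * 1.6, 0.0, 1.0)
fused = fuse([under, over])
```

Since the per-pixel weights are normalized to sum to one, the fused result stays within the intensity range of the inputs; the well-exposedness term pulls each output pixel toward whichever exposure rendered it closest to mid-intensity.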