In this paper, we propose a deep snapshot high dynamic range (HDR) imaging framework that can effectively reconstruct an HDR image from the RAW data captured using a multi-exposure color filter array (ME-CFA), which consists of a mosaic pattern of RGB filters with different exposure levels. To effectively train the HDR image reconstruction network, we introduce the idea of luminance normalization, which simultaneously enables effective loss computation and input data normalization by considering relative local contrasts in the “normalized-by-luminance” HDR domain. This idea makes it possible to handle the errors in both bright and dark areas equally, regardless of absolute luminance levels, which significantly improves the visual image quality in a tone-mapped domain. Experimental results using two public HDR image datasets demonstrate that our framework outperforms other snapshot methods and produces high-quality HDR images with fewer visual artifacts.
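To illustrate the luminance-normalization idea described above, the following is a minimal sketch (not the authors' released code) of a loss computed in a normalized-by-luminance HDR domain. The choice of per-pixel RGB mean as the luminance estimate and the `eps` stabilizer are assumptions for illustration.

```python
import numpy as np

def luminance_normalized_loss(pred_hdr, gt_hdr, eps=1e-6):
    """L1 loss in a normalized-by-luminance HDR domain (illustrative sketch).

    Dividing the per-pixel error by the local luminance makes an error of a
    given relative size count equally in bright and dark regions, instead of
    letting bright areas dominate the loss as in the plain HDR domain.
    """
    # Luminance estimated as the per-pixel mean over RGB (one common choice;
    # an assumption here, not necessarily the paper's definition).
    luminance = gt_hdr.mean(axis=-1, keepdims=True)
    return np.mean(np.abs(pred_hdr - gt_hdr) / (luminance + eps))
```

Because both the error and the normalizer scale together, multiplying the scene radiance by a constant leaves this loss essentially unchanged, which is the sense in which bright and dark areas are treated equally.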
- Paper [PDF]
- Supplementary Material [PDF]
- Supplementary Video [MP4]
- Presentation Slides [PDF]
- Executable Code [ZIP (372MB)]
(If you have any questions, please contact Yusuke Monno)
- Deep Snapshot HDR Imaging Using Multi-Exposure Color Filter Array
Takeru Suda, Masayuki Tanaka, Yusuke Monno, and Masatoshi Okutomi
Asian Conference on Computer Vision (ACCV), 2020.