30-04-2014, 03:42 PM
[b]Multiple exposure fusion for high dynamic range image acquisition[/b]
Abstract
A multiple exposure fusion method to enhance the dynamic
range of an image is proposed. High dynamic range images
(HDRIs) are constructed by combining multiple images
taken with different exposures and estimating the irradiance
value for each pixel. This is a common process for HDRI
acquisition. During this process, displacements of the images
caused by object movements often yield motion blur and ghosting
artifacts. To address this problem, this paper presents an efficient
and accurate multiple exposure fusion technique for HDRI
acquisition. Our method estimates displacements, occlusions, and
saturated regions simultaneously by using MAP (maximum a
posteriori) estimation, and constructs motion-blur-free HDRIs.
We also propose a new weighting scheme for the multiple image
fusion. We demonstrate that our HDRI acquisition algorithm is
accurate even for images with large motion.
INTRODUCTION
By adapting to light in any viewing condition, the human
visual system can capture a wide dynamic range of
irradiance (about 14 orders of magnitude in log units), while the
dynamic range of the CCD or CMOS sensors in most of today's
cameras does not cover the perceptual range of real scenes. It is
important in many applications to capture a wide range of
irradiance of a natural scene and store it in each pixel.
In computer graphics (CG) applications, a high dynamic range
image (HDRI) is widely used for high-quality rendering with
image-based lighting [1], [2]. HDR imaging technologies
have since been developed, and some high dynamic range sensors
are commercially available. They are used for in-vehicle
cameras, night-vision surveillance, camera-guided aircraft
docking [1], [4], high-contrast photo development [3], robot
vision [5], etc.
[b]MULTIPLE EXPOSURE FUSION[/b]
[b]A. Overview[/b]
The HDRI is constructed by combining multiple images.
The procedure for the HDRI acquisition is informally de-
scribed as follows.
1) The images are acquired with different exposure settings.
In our method, we assume that the exposures are set by
changing shutter speed while the aperture is fixed, and we
obtain a set of ordinary low dynamic range images with
8 bits/channel. In general, there is a nonlinear relationship
between the pixel values of the 8-bit images acquired by a
camera and the actual irradiance values. To compensate for
this nonlinearity, the photometric camera calibration described
in Section II-B is performed for the input images.
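Once the inverse response function has been obtained from the photometric calibration, the low dynamic range images can be mapped to relative irradiance by applying it and dividing by the exposure time. A minimal sketch follows; the function name `pixels_to_irradiance` and the lookup-table representation of the inverse response are illustrative assumptions, not from the paper.

```python
import numpy as np

def pixels_to_irradiance(images_8bit, shutter_speeds, inv_response):
    """Map 8-bit pixel values to relative irradiance estimates.

    images_8bit    : list of uint8 arrays, one per exposure
    shutter_speeds : exposure times (seconds), one per image
    inv_response   : length-256 lookup table approximating the inverse
                     camera response g^-1, obtained from photometric
                     camera calibration (Section II-B)
    """
    irradiances = []
    for img, dt in zip(images_8bit, shutter_speeds):
        # g^-1(Z) gives the exposure X = L * dt; divide by dt to get L
        exposure = inv_response[img.astype(np.int32)]
        irradiances.append(exposure / dt)
    return irradiances
```

With a fixed aperture, each image then provides an independent estimate of the same irradiance, differing only in which pixels are reliably exposed.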
2) We select a main image from the multiple exposure
images. For each of the other images, the displacement from
the main image, which is mainly due to object movements, is
found. In practice, we select an image with medium exposure
as the main image by default. Furthermore, occlusions
and under- and over-exposed regions are detected in each image.
This is done by the MAP based motion compensation method
in Section III.
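The main-image selection and the flagging of under- and over-exposed pixels can be sketched as below. The brightness-based ordering and the thresholds `low` and `high` are illustrative assumptions; the paper's MAP-based method (Section III) determines occlusions and saturation jointly rather than by simple thresholding.

```python
import numpy as np

def select_main_and_masks(images_8bit, low=10, high=245):
    """Pick the medium-exposure image as the main image and flag
    under-/over-exposed pixels in every image.

    Thresholds `low` and `high` are illustrative, not from the paper.
    Returns the index of the main image and a validity mask per image.
    """
    # order images by mean brightness; the middle one is the main image
    order = np.argsort([img.mean() for img in images_8bit])
    main_idx = order[len(order) // 2]
    # a pixel is valid when it is neither under- nor over-exposed
    masks = [(img >= low) & (img <= high) for img in images_8bit]
    return main_idx, masks
```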
[b]MOTION COMPENSATION[/b]
[b]A. MRF Model[/b]
L in (2) can be derived by g^-1 only when pixels are not
under- or over-exposed. Moreover, (4) is effective only when a
scene is static. Since moving objects cause motion blur, the
displacement must be compensated as much as possible.
The compensation is usually done in two steps: global motion
compensation and local displacement compensation. In our method
we assume that the global motion (e.g., motion caused by
camera shake) has already been compensated by some image
alignment algorithm [1]. In this section we focus on the local
displacement compensation.
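For intuition, local displacement compensation can be illustrated with a simple exhaustive block-matching search, shown below. This is only a stand-in for illustration: the paper instead estimates displacement, occlusion, and saturated regions jointly by MAP estimation over an MRF model, and the function name, block size, and search radius here are assumptions.

```python
import numpy as np

def block_match(main, other, block=8, search=4):
    """Estimate a per-block displacement of `other` relative to `main`
    by exhaustive SSD (sum of squared differences) search.

    Returns an (H/block, W/block, 2) array of (dy, dx) offsets.
    """
    h, w = main.shape
    flow = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = main[y:y + block, x:x + block].astype(np.float64)
            best, best_dv = np.inf, (0, 0)
            # scan all candidate offsets inside the search window
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = other[yy:yy + block, xx:xx + block].astype(np.float64)
                    ssd = ((ref - cand) ** 2).sum()
                    if ssd < best:
                        best, best_dv = ssd, (dy, dx)
            flow[by, bx] = best_dv
    return flow
```

A per-block search like this cannot express occlusion or saturation, which is precisely why the paper couples displacement estimation with those labels in a single MAP framework.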