17-11-2012, 04:07 PM
Image fusion-based contrast enhancement
Abstract
The goal of contrast enhancement is to improve visibility of image details without introducing unrealistic visual
appearances and/or unwanted artefacts. While global contrast-enhancement techniques enhance the overall
contrast, their dependence on the global image content limits their ability to enhance local details. They
also cause significant changes in image brightness and introduce saturation artefacts. Local enhancement
methods, on the other hand, improve image details but can produce block discontinuities, noise amplification and
unnatural image modifications. To remedy these shortcomings, this article presents a fusion-based contrastenhancement
technique which integrates information to overcome the limitations of different contrastenhancement
algorithms. The proposed method balances the requirement of local and global contrast
enhancements and a faithful representation of the original image appearance, an objective that is difficult to
achieve using traditional enhancement methods. Fusion is performed in a multi-resolution fashion using Laplacian
pyramid decomposition to account for the multi-channel properties of the human visual system. For this purpose,
metrics are defined for contrast, image brightness and saturation.
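As a rough illustration of this multi-resolution fusion, the sketch below blends Laplacian pyramids of the input images under Gaussian pyramids of their weight maps. It is not the article's implementation: the `down`/`up` resamplers are crude stand-ins for proper Gaussian pyramid filtering, and even image dimensions are assumed.

```python
import numpy as np

def down(img):
    """2x downsample by averaging 2x2 blocks (stand-in for blur + decimate)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    """Nearest-neighbour 2x upsample, cropped to a target shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass levels plus a low-pass residual as the last entry."""
    pyr, current = [], img.astype(np.float64)
    for _ in range(levels):
        small = down(current)
        pyr.append(current - up(small, current.shape))
        current = small
    pyr.append(current)  # Gaussian residual
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels):
        pyr.append(down(pyr[-1]))
    return pyr

def fuse(images, weights, levels=3):
    """Blend each Laplacian level with the matching Gaussian weight level."""
    wsum = np.sum(weights, axis=0) + 1e-12  # normalise weights per pixel
    weights = [w / wsum for w in weights]
    fused = None
    for img, w in zip(images, weights):
        lp, gp = laplacian_pyramid(img, levels), gaussian_pyramid(w, levels)
        blended = [l * g for l, g in zip(lp, gp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]  # collapse the fused pyramid back to an image
    for lap in reversed(fused[:-1]):
        out = up(out, lap.shape) + lap
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

Fusing identical inputs with equal weights reconstructs the original image, which makes a convenient sanity check.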
Introduction
The limitations in image acquisition and transmission
systems can be remedied by image enhancement. Its
principal objective is to improve the visual appearance
of the image for improved visual interpretation or to
provide better transform representations for subsequent
image processing tasks (analysis, detection, segmentation,
and recognition). Noise and blur removal, contrast improvement to reveal details, coding-artefact
reduction and luminance adjustment are examples of image enhancement operations.
Achromatic contrast is a measure of the relative variation
of luminance. It is highly correlated with the intensity
gradient [1]. There is, however, no universal definition
of contrast. It is well established that human contrast
sensitivity is a function of the spatial frequency;
therefore, the spatial content of the image should be
considered while defining the contrast. Based on this
property, the local band-limited contrast is defined by
assigning a contrast value to every point in the image
and at each frequency band as a function of the local
luminance and the local background luminance [2].
Another definition accounts for the directionality of the
human visual system (HVS) in defining the contrast [3].
Two contrast measures for simple patterns have been
commonly used: Weber contrast, for a single target on a uniform
background, and Michelson contrast, for periodic patterns.
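These two classical measures are Weber contrast, (L − Lb)/Lb, and Michelson contrast, (Lmax − Lmin)/(Lmax + Lmin); a minimal sketch:

```python
import numpy as np

def michelson_contrast(patch):
    """Michelson contrast for periodic patterns: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(patch.max()), float(patch.min())
    return (lmax - lmin) / (lmax + lmin + 1e-12)  # epsilon avoids division by zero

def weber_contrast(target, background):
    """Weber contrast for a target on a uniform background: (L - Lb) / Lb."""
    return (target - background) / (background + 1e-12)
```

For a grating alternating between luminances 50 and 150, both measures evaluate to 0.5 against their respective references.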
Image quality measures
Contrast-enhancement algorithms achieve different
amounts of detail preservation. Contrast enhancement
can lead to colour shift, washed out appearance and
saturation artefacts in regions with high signal activity
or textures. Such regions should receive less weight
during fusion, while areas with greater detail or lower signal
activity should receive higher weight. We
define image quality measures which guide the fusion
process. These measures are consolidated into a scalar
weight map to achieve the fusion goals described above.
This section is organized as follows. We first define
metrics to measure the contrast and luminance of the
enhanced images. Next, the computation of a scalar
weight map is explained.
Scalar weight map
The problem of synthesizing a composite/fused image
translates into the problem of computing the weights for
the fusion of the source images. A natural approach is to
assign each input image a weight that increases with its
salience (its importance for the task at hand).
Measures of salience are based on the criteria for the particular
vision task. The salience of a component is high if
the pattern plays a role in representing important information.
For the proposed fusion application, the less contrasted
and saturated regions should receive less weight
(less salience), while interesting areas containing bright
colours and details (high visual saliency) should have
high weight. Based on this requirement, weights (for the
fusion process) are computed by combining the measures
defined (according to the visual saliency) for the contrast,
saturation and luminance. We combine these measures
(contrast, saturation and luminance) into a weight map using a
multiplicative (AND) operation.
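The article defines its own contrast, brightness and saturation metrics; the sketch below uses illustrative stand-ins (a Laplacian-magnitude contrast term and a mid-gray brightness term, in the spirit of exposure fusion) only to show the multiplicative combination, assuming a grayscale image scaled to [0, 1].

```python
import numpy as np

def contrast_measure(gray):
    """Local contrast: magnitude of a discrete Laplacian response (illustrative)."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return np.abs(lap)

def brightness_measure(gray, sigma=0.2):
    """Favour mid-range luminance: Gaussian centred at 0.5 on a [0, 1] scale."""
    return np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))

def weight_map(gray):
    """Multiplicative (AND) combination: a pixel scores high only if
    every individual measure scores high there."""
    return (contrast_measure(gray) + 1e-12) * brightness_measure(gray)
```

A flat mid-gray image has zero contrast but maximal brightness score, so its weight map collapses to (almost) zero everywhere, as the AND semantics require.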
Results and discussion
This section presents the simulation results obtained
with the proposed fusion method and compares it to
other methods. The criteria of comparison are (1) contrast
enhancement and (2) the extent to which each
algorithm preserves the original image appearance (in
the sense that it should not produce any new details or
structure on the image) without introducing unwanted
artefacts. The comparison criteria and verification procedure
include quantitative measures as well as visual
inspection. The first part briefly describes the metrics to
assess the performance of contrast-enhancement methods.
These metrics are then used to compare the results
of the proposed approach with other existing methods
of contrast enhancement.
Grayscale image enhancement
The proposed method is applied to various grayscale
images by fusing the output from local and global contrast-
enhancement methods. For testing, we select the output
of three enhancement algorithms as fusion inputs: histogram
equalisation (HE), contrast-limited adaptive histogram equalisation (CLAHE) and MATLAB's
imadjust function. HE spreads out intensity values over
the brightness scale in order to achieve higher contrast.
It is suited for images that are low contrasted (narrow
histogram centred towards the middle of the gray scale),
dark (histogram components concentrated towards the
low side of the gray scale) and bright (where the components
of the histogram are biased towards the high side
of the gray scale). However, for images with a narrow histogram
and few gray levels, the increase in
dynamic range has the adverse effect of increased
graininess and patchiness.
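For reference, global HE and an imadjust-style linear stretch can be sketched in plain NumPy (CLAHE adds per-tile histograms with clip-limit redistribution and is most easily obtained from a library, e.g. OpenCV's cv2.createCLAHE; the function names below are illustrative):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalisation (HE) for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)          # cumulative distribution
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * 255.0).astype(np.uint8)    # intensity mapping table
    return lut[img]

def linear_stretch(img, low_pct=1, high_pct=99):
    """imadjust-style linear stretch between percentile clip points."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-12)
    return np.clip(np.round(out * 255.0), 0, 255).astype(np.uint8)
```

An image whose histogram is already uniform passes through both operations essentially unchanged, which is a quick correctness check.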
Colour image enhancement
The evolution of photography has increased interest
in colour imaging and, consequently, in colour contrast-enhancement
methods. The goal of colour contrast
enhancement is, in general, to produce an appealing image
or video with vivid colours and clarity of detail, qualities intimately
related to different attributes of visual sensation.
Most techniques used for colour contrast enhancement
are similar to those for grayscale images [28,29]. However,
while most of these methods bring out image detail, they
can also alter the colour temperature, change the apparent
lighting conditions and produce unnaturally sharpened
images.
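A common way to limit such colour shifts, not specific to this article, is to enhance only a luminance channel and rescale the RGB channels by the resulting luminance gain; the helper below is a hypothetical sketch (the enhance_gray argument could be any grayscale method, e.g. HE):

```python
import numpy as np

def enhance_colour(rgb, enhance_gray):
    """Apply a grayscale enhancement to luminance only; hue is roughly
    preserved by scaling all three channels with the same luminance gain."""
    rgb = rgb.astype(np.float64)
    # Rec. 601 luma weights
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_enh = enhance_gray(np.round(y).astype(np.uint8)).astype(np.float64)
    gain = (y_enh + 1e-6) / (y + 1e-6)   # per-pixel luminance gain
    out = rgb * gain[..., None]          # broadcast gain over channels
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

With an identity enhancement the output matches the input up to rounding, confirming that the gain rescaling itself introduces no colour shift.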
Conclusion and perspectives
This article presents a novel fusion-based contrast-enhancement
method for grayscale and colour images.
This study demonstrates how a fusion approach can
provide the best compromise between the different attributes
of contrast enhancement in order to obtain perceptually
more appealing results. In this way, we can
fuse the output of different traditional methods to provide
an efficient solution. Results show the effectiveness
of the proposed algorithm in enhancing local and global
contrasts, suppressing saturation and over-enhancement
artefacts while retaining the original image appearance.
The aim is not to compare different fusion methodologies
or to benchmark performance with different quality
metrics, but rather to introduce the idea of improving
the performance of image enhancement methods using
image fusion.