03-05-2013, 04:55 PM
A GRADIENT-BASED HYBRID IMAGE FUSION SCHEME USING OBJECT
EXTRACTION
ABSTRACT
This paper presents a new hybrid image fusion scheme that
combines features of pixel and region based fusion, to be
integrated into a surveillance system. In such systems, objects can be
extracted from the different sets of source images, given the
availability of background information, and transferred to the new
composite image without the additional processing usually imposed by
other fusion approaches.
The background information is then fused in a multi-resolution
pixel-based fashion using gradient-based rules to yield a more
reliable feature selection. According to the Piella and Petrovic
quantitative evaluation metrics, the proposed scheme exhibits
superior performance compared to existing fusion algorithms.
INTRODUCTION
The increased importance gained by security and surveillance
systems over the recent years has motivated the research
community to investigate and explore more effective and flexible
solutions to the challenges imposed by such systems.
Consequently, image fusion has drawn a great deal of attention and
has become a major part of any surveillance system, due to its
important role in combining complementary information from
different sources of optical sensors (e.g. Visible and Infrared) into
one composite image or video, thus minimizing the amount of data
that needs to be stored while preserving all the salient features of
the source images, and more importantly, enhancing the
informational quality of the surveyed scene. The process of image
fusion must ensure that all the salient information present in the
source images is transferred to the composite image. Information
fusion can be performed at three levels: pixel, object, and decision
level. A number of pixel-level fusion techniques, in which the
source images are processed and fused on a per-pixel basis (or
according to a small window in the neighborhood of each pixel), can
be found in the literature.
BACKGROUND AND RELATED WORK
Pixel-Based Fusion
The process of combining different types of source images at
each pixel location (or according to a small window in the
neighborhood of that pixel) is called pixel-level fusion. The most
commonly used techniques can be grouped into two major
categories: arithmetic and biologically-based methods. The
weighted combination (e.g. averaging) and principal component
analysis (PCA [6]) are widely known arithmetic pixel-level fusion
methods. Despite their low complexity and computational efficiency,
the fused image suffers from contrast loss and attenuation of
salient features [7]. These limitations of arithmetic methods led to
the development of biologically-based methods, which are inspired by
the human visual system and its sensitivity to local contrast
changes such as edges. Multiresolution (MR)
decomposition methods were found to be very effective in
representing such changes and therefore, they were used for the
purpose of image fusion. The process starts by decomposing the
source images into different resolutions so that the features of each
image are represented at different scales.
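As a concrete illustration of this MR pipeline, the sketch below decomposes each source image into a Laplacian pyramid and fuses the detail coefficients with a max-absolute selection rule before reconstructing. Note the assumptions: the max-abs rule is a common textbook selection rule, not the gradient-based rule this paper proposes, and numpy is the only dependency.

```python
import numpy as np

# 5-tap binomial kernel, a standard choice for Gaussian/Laplacian pyramids
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _blur(img):
    """Separable 5-tap blur with edge padding."""
    rows, cols = img.shape
    pad = np.pad(img, 2, mode="edge")
    horiz = sum(KERNEL[k] * pad[:, k:k + cols] for k in range(5))
    return sum(KERNEL[k] * horiz[k:k + rows, :] for k in range(5))

def _upsample(img, shape):
    """Zero-insertion upsampling to `shape`, then blur (x4 gain compensation)."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * _blur(up)

def laplacian_pyramid(img, levels):
    """Band-pass detail images at `levels` scales plus a low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = _blur(cur)[::2, ::2]                    # next coarser scale
        pyr.append(cur - _upsample(low, cur.shape))   # detail at this scale
        cur = low
    pyr.append(cur)                                   # low-pass residual
    return pyr

def fuse_mr(a, b, levels=3):
    """MR pixel-based fusion: max-abs selection on details, averaged residual."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]                                   # reconstruct coarse-to-fine
    for detail in reversed(fused[:-1]):
        out = detail + _upsample(out, detail.shape)
    return out
```

Because each detail band is defined as the difference between a scale and the upsampled next-coarser scale, reconstruction is exact: fusing an image with itself returns it unchanged, while fusing two different sources keeps, at every scale and location, the coefficient with the stronger response.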
Region-Based Fusion
The motivation for region-based approaches derives from the fact
that a pixel usually belongs to an object or a region in an image,
so it is more reasonable to consider regions rather than individual
pixels. Pixel-based schemes overlook this, a limitation that led to
the development of region-based schemes, in which the source images
are first segmented to yield a region map comprising all the regions
that constitute the image. A set of fusion rules is then applied to
the corresponding regions from the different sets of images,
depending on region activity levels and similarity match measures.
Region-based approaches may help to avoid several drawbacks found in
their pixel-based counterparts, such as sensitivity to noise,
blurring effects, and misregistration. In [2], the
source images are first decomposed using the MR transform
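A minimal sketch of the region-based pipeline just described, under simplifying assumptions: the region map is taken as given (a real system would segment the source images first), and region activity is measured as the mean gradient magnitude inside each region — one plausible activity measure, not necessarily the rule used in [2].

```python
import numpy as np

def region_activity(img, mask):
    """Activity of one region: mean gradient magnitude inside the mask."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy)[mask].mean())

def region_fuse(a, b, region_map):
    """For each labelled region, copy pixels from the more active source."""
    out = np.empty(a.shape, dtype=float)
    for label in np.unique(region_map):
        mask = region_map == label
        # select the source image whose region carries more detail
        src = a if region_activity(a, mask) >= region_activity(b, mask) else b
        out[mask] = src[mask]
    return out
```

Because whole regions are transferred intact, a single noisy pixel cannot flip the selection the way it can in per-pixel rules, which is the robustness advantage the text attributes to region-based schemes.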