
Multi-level Image Fusion
ABSTRACT
Signal-level image fusion has in recent years established itself as a useful tool for dealing with the vast amounts of image data obtained by disparate sensors. In many modern multisensor systems, fusion algorithms significantly reduce the amount of raw data that needs to be presented or processed, without loss of information content, and provide an effective means of information integration. One of the most useful and widespread applications of signal-level image fusion is fusion for display. Fused images provide the observer with a more reliable and more complete representation of the scene than would be obtained through single-sensor display configurations. In recent years a plethora of algorithms that deal with the problem of fusion for display have been proposed. However, almost all are based on relatively basic processing techniques and do not consider information from higher levels of abstraction. As some recent studies have shown, this does not always satisfy the complex demands of a human observer, and a more subjectively meaningful approach is required. This paper presents a fusion framework based on the idea that subjectively relevant fusion can be achieved if information at higher levels of abstraction, such as image edges and image segment boundaries, is used to guide the basic signal-level fusion process. Both the fusion of processed, higher-level information to form a blueprint for fusion at signal level and the fusion of information from multiple levels of extraction within a single fusion engine are considered. When tested on two conventional signal-level fusion methodologies, such multi-level fusion structures eliminated undesirable effects, such as fusion artefacts and loss of visually vital information, that compromise their usefulness. Images produced by including multi-level information in the fusion process are clearer and of generally better quality than those obtained through conventional low-level fusion. This is verified through subjective evaluation and established objective fusion performance metrics.
Keywords: image fusion, signal-level fusion, feature-level fusion, multi-level fusion
1. INTRODUCTION
The emergence and relatively fast proliferation of multisensor arrays has established multisensor information fusion [1], and image fusion in particular, as an important area of research. The availability of reliable and accurate imaging sensors at ever decreasing prices has made the use of multisensor arrays a widespread practice in applications such as remote sensing, night vision, medical imaging, and security and surveillance. Integrating disparate sensors into a system as independent information sources can improve its performance and add greatly to its robustness. Multisensor arrays cover broader portions of the spectrum, can offer higher sensitivity and resolution, and are generally more reliable than single-sensor suites, especially in adverse conditions [2]. However, the introduction of additional sensors into an existing array, although generally advantageous, introduces a number of characteristic problems. One of the most significant is data overload. In practice, additional sensors can increase the amount of data produced by the array to levels that the processing power of the system cannot cope with. Similarly, presenting human observers with more than one visual cue simultaneously causes confusion and diminishes their performance [3].
Signal-level image fusion is an established tool for dealing with the vast amounts of image data obtained by disparate sensors. Fusion algorithms provide a significant reduction in the amount of raw data without loss of information content. Additionally, fusion provides effective information integration, which is not possible when processing single-sensor outputs individually. This is of particular importance in one of the most useful and widespread applications of signal-level image fusion: fusion for display. Fused images provide the observer with a better and more complete representation of the scene than would be obtained through otherwise ineffective single-sensor display configurations [2]. In response to this, a plethora of algorithms that perform fusion for display have been proposed.
One of the earliest multisensor image fusion methods, proposed by Toet [3], was based on multiresolution image analysis using contrast pyramids. The Discrete Wavelet Transform (DWT) was first applied to image fusion in a system by Li et al. [4], and a thorough investigation into DWT and other related multiscale and multiresolution fusion methods is provided by Zhang and Blum [13]. Multiscale fusion for visual display, with additional orientation sensitivity, was considered by Peli et al. [5] and in an efficient real-time framework by Petrović and Xydeas [6]. More recently, a fusion system based on wavelet transform modulus maxima was proposed by Qu et al. [7], and a multiscale method based on morphological towers was presented by Mukhopadhyay and Chanda [8] for fusion of medical images. However, almost all fusion methods presented so far are based on relatively basic processing techniques and do not consider subjectively relevant information from higher levels of abstraction. As some recent studies have shown [9-11], this does not always satisfy the complex demands of a human observer, and a more subjectively meaningful approach is required.
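To give a rough sense of how a basic signal-level, DWT-based fusion rule of the kind surveyed above operates, the following Python sketch (using the PyWavelets package) averages the coarse approximation band and applies a choose-max rule to the detail coefficients. The choose-max rule and the function shown here are common conventions used for illustration only, not the specific algorithms of the cited papers.

```python
# Minimal sketch of DWT-based signal-level image fusion (illustrative only,
# not the exact algorithms of [4] or [13]): average the coarse approximation
# and keep the larger-magnitude detail coefficient at every position/scale.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered, same-sized grayscale images (2-D float arrays)."""
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=levels)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=levels)

    # Approximation band: simple averaging preserves overall luminance.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

    # Detail bands: choose-max selects the coefficient with larger absolute
    # value, i.e. the locally "sharper" sensor at that scale and orientation.
    for details_a, details_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(
            np.where(np.abs(da) >= np.abs(db), da, db)
            for da, db in zip(details_a, details_b)
        ))

    return pywt.waverec2(fused, wavelet)
```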
This paper presents some initial results of a fusion framework based on the idea that subjectively relevant fusion can be achieved if information at higher levels of abstraction, such as image edges and image segment boundaries, is used to guide the basic signal-level fusion process. Both the fusion of processed, higher-level information to form a blueprint for fusion at signal level and the fusion of information from multiple levels of extraction within a single fusion engine are considered. Such a multi-level approach is able to eliminate undesirable effects such as fusion artefacts and loss of vital visual information, and to generally improve the quality of the fused image and the overall reliability of the fusion process.
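To make the guiding idea more concrete, the sketch below is a hypothetical illustration of higher-level information steering a pixel-level combination; it is not the framework proposed in this paper. Smoothed gradient-magnitude edge maps stand in for the higher-level (edge or segment-boundary) information and act as weights, so each output pixel is drawn more strongly from the sensor with locally stronger structure.

```python
# Illustrative sketch only: edge information guiding pixel-level fusion.
# Gradient magnitude is a crude stand-in for the higher-level information
# (edges, segment boundaries) discussed in the text.
import numpy as np
from scipy import ndimage

def edge_guided_fuse(img_a, img_b, smoothing=2.0, eps=1e-6):
    """Weighted pixel-level fusion of two registered grayscale images,
    with weights derived from smoothed gradient-magnitude edge maps."""
    def edge_strength(img):
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        mag = np.hypot(gx, gy)
        # Smooth so weights vary over regions rather than single pixels.
        return ndimage.gaussian_filter(mag, smoothing)

    wa = edge_strength(img_a)
    wb = edge_strength(img_b)
    total = wa + wb + eps  # eps avoids division by zero in flat regions
    return (wa * img_a + wb * img_b) / total
```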
The next section describes the general structure of image-based information fusion and outlines two basic techniques for signal-level image fusion, which were tested within the multi-level fusion framework. Section 3 outlines the proposed multi-level image fusion system structure as well as methods for combining information from different levels of abstraction. Preliminary results of the proposed multi-level fusion systems are presented and discussed in section 4, while the paper is concluded in section 5.

Download full report
http://www.imagefusionpublications/proce...ie2003.pdf