
PROCESSING OF SATELLITE IMAGE USING DIGITAL IMAGE PROCESSING





ABSTRACT

Pictures are the most effective means of conveying information; a picture is worth a
thousand words. Pictures concisely convey information about positions, sizes and
interrelationships between objects. Human beings are good at deriving information from such
images because of our innate visual and mental abilities. About 75% of the information
received by humans is in pictorial form.
The paper describes the basic technological aspects of digital image processing with
special reference to satellite image processing. Broadly, all satellite image processing
operations can be grouped into three categories:
Image rectification and restoration
Image enhancement
Information extraction
The first deals with the initial processing of raw image data to correct for geometric distortions.
Enhancement procedures are applied to image data in order to display the data effectively
for subsequent visual interpretation. The intent of the classification process is to categorize all
pixels in a digital image into one of several land cover classes or themes; this classified data
may then be used to produce thematic maps of the land cover present in an image.
Our paper applies the above-stated operations to data from a satellite image
(IRS-P6) acquired by the LISS-III sensor at 23.5 m resolution for the 55o7 region of the toposheet.
Working on this satellite image, we extracted information that led us to valuable
conclusions and that reveals how image processing can be put to use.


INTRODUCTION

Our discussion will focus on the analysis of remotely sensed images. These images are
represented in digital form. When represented as numbers, brightness values can be added,
subtracted, multiplied, divided and, in general, subjected to statistical manipulations that are
not possible if an image is presented only as a photograph. Previously, digital remote sensing
data could be analyzed only at specialized remote sensing laboratories. The specialized
equipment and trained personnel necessary to conduct routine machine analysis of data were
not widely available, in part because of the limited availability of digital remote sensing data and
a lack of appreciation of their qualities.


DIGITAL IMAGE

A digital remotely sensed image is typically composed of picture elements (pixels) located at
the intersection of each row i and column j in each of the K bands of imagery. Associated with each
pixel is a number known as the Digital Number (DN) or Brightness Value (BV) that depicts the
average radiance of a relatively small area within a scene (Fig. 1). A small number
indicates low average radiance from the area, and a high number is an indicator of high
radiant properties of the area. The size of this area affects the reproduction of detail within
the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation.
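As a minimal sketch of this indexing scheme (using NumPy with a synthetic array, since the paper does not specify a file format or reader), the DN of a pixel is simply an element of a rows x columns x bands array:

    import numpy as np

    # Synthetic multi-band image: rows x columns x K bands, each pixel
    # holding an 8-bit Digital Number (DN) / Brightness Value (BV).
    rows, cols, K = 400, 400, 3
    image = np.random.randint(0, 256, size=(rows, cols, K), dtype=np.uint8)

    # DN of the pixel at row i, column j in band k
    i, j, k = 120, 85, 0
    print("DN at (row, col, band):", image[i, j, k])

    # Mean DN per band, a rough proxy for the average radiance of the scene
    print("Mean DN per band:", image.mean(axis=(0, 1)))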


COLOR COMPOSITES

When displaying the different bands of a multispectral data set, if the images obtained in
different bands are displayed in image planes other than their own, the color composite is regarded as a
False Color Composite (FCC). High spectral resolution is important when producing color
composites. For a true color composite, the image data from the red, green and blue spectral
regions must be assigned to the red, green and blue bits of the image processor frame buffer memory. A
color infrared composite, the 'standard false color composite', is displayed by placing the infrared,
red and green bands in the red, green and blue frame buffer memory respectively (Fig. 2). In such a composite, healthy vegetation
shows up in shades of red because vegetation absorbs most of the green and red energy but
reflects approximately half of the incident infrared energy. Urban areas reflect roughly equal portions of
NIR, R and G, and therefore appear steel grey.
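A minimal sketch of building such a standard false color composite, assuming the NIR, red and green bands are already available as 2-D NumPy arrays of the same shape:

    import numpy as np

    def false_color_composite(nir, red, green):
        """Standard false color composite: the NIR, red and green bands are
        placed in the red, green and blue display channels respectively."""
        fcc = np.dstack([nir, red, green]).astype(np.float32)
        for c in range(3):  # scale each channel independently to 0-255 for display
            band = fcc[:, :, c]
            lo, hi = band.min(), band.max()
            fcc[:, :, c] = 255.0 * (band - lo) / max(hi - lo, 1e-9)
        return fcc.astype(np.uint8)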


IMAGE RECTIFICATION & RESTORATION


Geometric distortions manifest themselves as errors in the position of a pixel relative to other
pixels in the scene and with respect to its absolute position within some defined map
projection. If left uncorrected, these geometric distortions render any data extracted from the
image useless. This is particularly so if the information is to be compared to other data sets, be
it another image or a GIS data set. Distortions occur for many reasons. For instance,
distortions arise from changes in platform attitude (roll, pitch and yaw), altitude, earth
rotation, earth curvature, panoramic distortion and detector delay. Most of these distortions
can be modelled mathematically and are removed before you buy an image. Changes in
attitude, however, can be difficult to account for mathematically, and so a procedure called
image rectification is performed. Satellite systems are nevertheless geometrically quite stable, and
geometric rectification is a simple procedure based on a mapping transformation relating real
ground coordinates, say in easting and northing, to image line and pixel coordinates.
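As an illustration of such a mapping transformation (a sketch only; the paper does not state the polynomial order, so a first-order, affine fit from ground control points is assumed here):

    import numpy as np

    def fit_affine_mapping(easting, northing, line, pixel):
        """Least-squares fit of a first-order (affine) mapping from ground
        coordinates (easting, northing) to image coordinates (line, pixel),
        using a set of ground control points."""
        e = np.asarray(easting, dtype=float)
        n = np.asarray(northing, dtype=float)
        A = np.column_stack([np.ones_like(e), e, n])
        coef_line, *_ = np.linalg.lstsq(A, np.asarray(line, dtype=float), rcond=None)
        coef_pixel, *_ = np.linalg.lstsq(A, np.asarray(pixel, dtype=float), rcond=None)
        return coef_line, coef_pixel

    def ground_to_image(e, n, coef_line, coef_pixel):
        """Predict the (line, pixel) position of a ground coordinate."""
        v = np.array([1.0, e, n])
        return float(v @ coef_line), float(v @ coef_pixel)

With three non-collinear control points the affine fit is exact; with more, the residuals indicate how well the transformation accounts for the remaining distortion.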


IMAGE ENHANCEMENT


Image enhancement techniques improve the quality of an image as perceived by a human.
These techniques are most useful because many satellite images, when examined on a colour
display, give inadequate information for image interpretation. A wide variety of
techniques exists for improving image quality; contrast stretching, density slicing, edge
enhancement and spatial filtering are the most commonly used. Image
enhancement is attempted after the image has been corrected for geometric and radiometric
distortions. Image enhancement methods are applied separately to each band of a multispectral
image. Digital techniques have been found to be more satisfactory than
photographic techniques for image enhancement because of the precision and wide variety of
digital processes.
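A minimal sketch of one such technique, a percentile-based linear contrast stretch applied to a single band (the percentile cut-offs are illustrative assumptions, not values from the paper):

    import numpy as np

    def linear_contrast_stretch(band, low_pct=2.0, high_pct=98.0):
        """Linearly stretch a single band so that the chosen percentile range
        fills the full 0-255 display range; values outside it are clipped."""
        lo, hi = np.percentile(band, [low_pct, high_pct])
        stretched = (band.astype(np.float32) - lo) / max(hi - lo, 1e-9) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)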


INFORMATION EXTRACTION

Image Classification
The overall objective of image classification is to automatically categorize all pixels in an
image into land cover classes or themes. Normally, multispectral data are used to perform
the classification, and the spectral pattern present within the data for each pixel is used as the
numerical basis for categorization. That is, different feature types manifest different
combinations of DNs based on their inherent spectral reflectance and emittance properties.
The term classifier refers loosely to a computer program that implements a particular
classification procedure. Because these procedures vary so greatly, it is essential that the
analyst understands the alternative strategies for image classification.
The traditional methods of classification mainly follow two approaches: unsupervised and
supervised. The unsupervised approach attempts spectral groupings that may have an unclear
meaning from the user's point of view. Having established these groups, the analyst then tries to
associate an information class with each one. The unsupervised approach is often referred
to as clustering, and it results in statistics that describe purely spectral, statistical clusters. In the
supervised approach to classification, the image analyst supervises the pixel categorization
process by specifying, to the computer algorithm, numerical descriptors of the various land
cover types present in the scene. To do this, representative sample sites of known cover types,
called training areas or training sites, are used to compile a numerical interpretation key that
describes the spectral attributes of each feature type of interest. Each pixel in the data set is then
compared numerically to each category in the interpretation key and labelled with the name
of the category it most closely resembles. In the supervised approach the user defines the information
categories and then examines their spectral separability, whereas in the unsupervised
approach the analyst first determines spectrally separable classes and then defines their informational
utility.
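As an illustration of the supervised approach, here is a minimal minimum-distance-to-means classifier (one of several possible decision rules; the paper does not name a specific one), in which the class means computed from the training sites act as the numerical interpretation key:

    import numpy as np

    def minimum_distance_classify(image, training_sites):
        """Assign every pixel to the class whose mean training spectrum it is
        closest to (Euclidean distance in DN space).

        image          : (rows, cols, bands) array of DNs
        training_sites : dict mapping class name -> (n_samples, bands) array
                         of DNs sampled from known training areas
        """
        classes = list(training_sites)
        means = np.array([training_sites[c].mean(axis=0) for c in classes])
        pixels = image.reshape(-1, image.shape[-1]).astype(np.float32)
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1).reshape(image.shape[:2])
        return labels, classes

The returned label array can be colour-coded per class to produce the thematic map described above; an unsupervised alternative would replace the training-site means with cluster centres found by a clustering algorithm.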


CONCLUSION


Digital image processing of satellite data can be primarily grouped into three categories:
o Image Rectification and Restoration,
o Enhancement and
o Information extraction.
Image rectification is the pre-processing of satellite data for geometric and radiometric
corrections. Enhancement is applied to image data in order to display the data effectively for
subsequent visual interpretation. Information extraction is based on digital classification and
is used for generating digital thematic maps.