1. An Introduction to Image Enhancement
Image enhancement improves the quality (clarity) of images for human viewing.
Removing blur and noise, increasing contrast, and revealing details are typical enhancement operations. For example, an image of an endothelial cell might be of low contrast and somewhat blurred; reducing the noise and blur and increasing the contrast range could enhance it. The original image might also have areas of very high and very low intensity that mask details, and an adaptive enhancement algorithm can reveal them. Adaptive algorithms adjust their operation based on the image information (pixels) being processed. In this case the mean intensity, contrast, and sharpness (amount of blur removal) could be adjusted based on the pixel intensity statistics in various areas of the image.
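The adaptive idea described above can be sketched in a few lines, assuming 8-bit grayscale images stored as NumPy arrays. The block size, the target standard deviation, and the function name are illustrative assumptions, not part of the original report:

```python
import numpy as np

def adaptive_contrast(image, block=8, target_std=40.0):
    """Sketch of adaptive enhancement: rescale the contrast of each
    block toward a target standard deviation, so low-contrast areas
    get a larger gain than already-contrasty ones."""
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            region = out[i:i+block, j:j+block]
            mean, std = region.mean(), region.std()
            gain = target_std / std if std > 1e-6 else 1.0
            # stretch pixel values about the local mean
            region[...] = mean + gain * (region - mean)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A block whose statistics already match the target is left nearly unchanged, which is exactly the "adjust operation based on local pixel statistics" behavior the text describes.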
1.1 Definition of Image Enhancement:
The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application.
Image enhancement is one of the most interesting and visually appealing areas of image processing.
1.2 Image enhancement approaches fall into two broad categories:
1.2.1 Spatial domain and intensity transformations
1.2.2 Frequency domain approaches
1.2.1 Spatial domain:
• Image plane
• Image processing methods based on direct manipulation of pixels
• Two principal image processing technique classifications
1. Intensity transformation methods
2. Spatial filtering methods
Background
1.2.1a Spatial domain
1. Aggregate pixels composing an image
2. Computationally more efficient, requiring fewer processing resources to implement
1.2.1b Spatial domain processes are denoted by the expression
1. g(x, y) = T[f(x, y)]
2. f(x, y) is input image
3. g(x, y) is output image
4. T is an operator on f, defined over some neighborhood of f(x, y)
5. T may also operate on a set of images (e.g., adding two images)
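The expression above can be made concrete with the simplest choices of T. This is a minimal sketch assuming 8-bit grayscale NumPy arrays; the function names are illustrative:

```python
import numpy as np

# g(x, y) = T[f(x, y)] with T as a point operation:
# the image negative, s = 255 - r, applied element-wise.
def T_negative(f):
    return 255 - f

# T operating on a set of images: summing two images,
# clipped back into the 8-bit range.
def T_add(f1, f2):
    return np.clip(f1.astype(int) + f2.astype(int), 0, 255).astype(np.uint8)
```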
1.2.1c Neighborhood of a point (x, y)
1. Square or rectangular sub-image area centered at (x, y)
2. Typically, the neighborhood is much smaller than the image
3. Center moves over each pixel in the image
4. T is applied at each point to get g at that location
5. Compute the average intensity of the neighborhood
6. Also possible to have neighborhood approximations in the form of a circle
7. The above application is also called spatial filtering
8. Neighborhood may be extracted by a spatial mask, or kernel, or template, or window.
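The neighborhood averaging described in the list above (move a window over each pixel, compute the neighborhood mean) is a basic spatial filter. A sketch, assuming a grayscale NumPy array and edge replication at the borders (a common but not mandated border choice):

```python
import numpy as np

def mean_filter(f, size=3):
    """Slide a size x size neighborhood over each pixel (x, y) and set
    g(x, y) to the average intensity of that neighborhood."""
    pad = size // 2
    fp = np.pad(f.astype(float), pad, mode='edge')  # replicate borders
    g = np.empty(f.shape, dtype=float)
    h, w = f.shape
    for x in range(h):
        for y in range(w):
            g[x, y] = fp[x:x+size, y:y+size].mean()
    return g.astype(np.uint8)
```

Averaging smooths intensity transitions, which is why this operator is often used as a simple noise-reduction step.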
1.2.1d Single pixel neighborhood or point processing techniques
1. Simplest form of T
2. Smallest possible neighborhood of size 1 × 1
g depends only on the value of f at a single point (x, y)
3. Gray-level transformation function of the form
s = T(r)
4. r and s denote the gray levels of f(x, y) and g(x, y), respectively
5. Thresholding
6. Contrast stretching
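The two point-processing examples above, thresholding and contrast stretching, are each a one-line transformation s = T(r). A sketch for 8-bit grayscale NumPy arrays; the threshold value and function names are illustrative:

```python
import numpy as np

def threshold(f, t=128):
    """s = 255 if r > t, else 0: the simplest gray-level transformation."""
    return np.where(f > t, 255, 0).astype(np.uint8)

def contrast_stretch(f):
    """Linearly map the image's [min, max] intensity range onto the
    full display range [0, 255]."""
    r_min, r_max = float(f.min()), float(f.max())
    if r_max == r_min:          # flat image: nothing to stretch
        return f.copy()
    return ((f - r_min) * 255.0 / (r_max - r_min)).astype(np.uint8)
```

Both are point operations: each output pixel depends only on the corresponding input pixel, so no neighborhood is needed.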
1.2.1e Larger pixel neighborhoods (mask processing or filtering)
1. T uses a neighborhood around (x, y) to determine the value of g(x, y)
2. Neighborhood defined by masks, filters, kernels, templates, or windows (all refer to the same thing)
3. A kernel is a small 2D array whose coefficients determine the nature of the process
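A kernel's coefficients determining "the nature of the process" can be shown directly: the same sliding-window code produces averaging, identity, or sharpening depending only on the coefficient array. A sketch assuming grayscale NumPy arrays with edge replication at the borders:

```python
import numpy as np

def apply_kernel(f, kernel):
    """Slide a small 2D kernel over the image; each output pixel is the
    sum of the neighborhood weighted by the kernel coefficients."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f.astype(float), ((ph, ph), (pw, pw)), mode='edge')
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = (fp[x:x+kh, y:y+kw] * kernel).sum()
    return np.clip(np.rint(g), 0, 255).astype(np.uint8)

box = np.full((3, 3), 1 / 9.0)  # averaging (box) kernel: smooths
```

With the box kernel this reproduces the neighborhood average; replacing the coefficients (e.g., with a Laplacian-style kernel) changes the operation without touching the sliding-window code.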