
DIGITAL IMAGE PROCESSING: APPLICATION FOR ABNORMAL INCIDENT DETECTION
Abstract
Intelligent vision systems (IVS) represent an exciting part of modern sensing, computing, and engineering systems. The principal information source in an IVS is the image, a two-dimensional representation of a three-dimensional scene. The main advantage of IVS is that the information is in a form that can be interpreted by humans.
Our paper presents an image-processing application for abnormal incident detection, which can be used in high-security installations, subways, etc. In our work, motion cues are used to classify dynamic scenes and subsequently allow the detection of abnormal movements, which may be related to critical situations.
Successive frames are extracted from the video stream and compared. By subtracting the second image from the first, a difference image is obtained. This is then segmented to aid error measurement and thresholding. If the threshold is exceeded, the human operator is alerted so that he or she may take remedial action. Thus, by processing the input image suitably, our system alerts operators to any abnormal incidents that might lead to critical situations.
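The pipeline just described (subtract consecutive frames, threshold the difference, alert the operator) can be sketched in a few lines. The paper implements this in MATLAB; the sketch below uses Python with NumPy instead, and the parameter values `pixel_eps` and `alert_fraction` are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def detect_abnormal_motion(frame1, frame2, pixel_eps=20, alert_fraction=0.05):
    """Compare two consecutive grayscale frames: subtract them,
    threshold the absolute difference into a binary motion map,
    and flag an alert when the fraction of moving pixels is large.
    Both threshold values here are illustrative assumptions."""
    diff = np.abs(frame1.astype(int) - frame2.astype(int))
    moving = diff > pixel_eps              # binary difference image
    changed = moving.mean()                # fraction of pixels in motion
    return bool(changed > alert_fraction)  # True -> alert the operator
```

In a deployment the returned flag would trigger the operator alert; here it simply lets the caller decide what "remedial action" means.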
1. INTRODUCTION
1.1. Need for automated Surveillance
Motion-based automated surveillance or intelligent scene-monitoring systems were introduced in the recent past. Video motion detection and other similar systems aim to alert operators or start a high-resolution video recording when the motion conditions of a specific area in the scene change.
In recent years, interest in automated surveillance systems has grown dramatically, as advances in image processing and computer hardware technologies have made it possible to design intelligent incident detection algorithms and implement them as real-time systems. The need for such equipment has been obvious for quite some time now, as human operators are unreliable, fallible and expensive to employ.
1.2. Motion analysis for incident detection
Interest in motion processing has increased with advances in motion analysis methodology and processing capabilities. The concept of automated incident detection is based on the idea of finding suitable image cues that can represent the specific event of interest with minimal overlap with other event classes. In this paper, motion is adopted as the main cue for abnormal incident detection.
1.3. Image acquisition
Obtaining the images is the first step in implementing the system.
1.4. Camera position
The camera is placed at a fixed height in the subway or corridor. This position need not be changed during the course of operation.
1.5. Frame extraction
In this system, motion is used as the main cue for abnormal incident detection. It follows that the first concern is obtaining the required images from the source. In the settings described (subways, high-security installations), a closed-circuit television system is usually employed.
Ordinary video systems use 25 frames per second. The system described here uses scene motion information extracted at a rate of 8.33 times per second, which amounts to capturing one frame out of every three from the video camera system. In practical real-time operation, a hardware block-matching motion detector is used for frame extraction.
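The sampling scheme above (one frame kept out of every three of a 25 fps stream, giving roughly 8.33 samples per second) can be sketched as a simple generator. Index-based sampling stands in here for the hardware block-matching detector mentioned in the text, purely for illustration.

```python
def sample_frames(frames, keep_every=3):
    """Keep one frame out of every 'keep_every' from the stream.
    With a 25 fps source and keep_every=3 this yields the roughly
    8.33 samples per second quoted in the text. 'frames' can be
    any iterable of frames (arrays, file handles, etc.)."""
    for i, frame in enumerate(frames):
        if i % keep_every == 0:
            yield frame
```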
2. THE DIFFERENCE IMAGE
There are two major approaches to extracting two-dimensional motion from image sequences: optical flow and motion correspondence. Simple subtraction of images acquired at different instants in time makes motion detection possible when there is a stationary camera and constant illumination. Both of these conditions are satisfied in the areas of application of our system.
A difference image is a binary image d(i, j) in which non-zero values mark image areas with motion, that is, areas where there was a substantial difference between the gray levels in consecutive images p1 and p2:

d(i, j) = 0 if |p1(i, j) − p2(i, j)| ≤ ε
        = 1 otherwise

where ε is a small positive number. Figure 3 shows the resultant image obtained by subtracting images 1 and 2. The threshold level used in the system is 0.8, which is found to be sufficient for obtaining a good binary difference image.
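The definition above translates directly into a short NumPy function (an assumption of this sketch; the paper itself uses MATLAB). Intensities are assumed normalized to [0, 1], so the 0.8 level quoted in the text applies directly.

```python
import numpy as np

def difference_image(p1, p2, eps=0.8):
    """Binary difference image d(i, j) as defined above: 1 where
    |p1(i, j) - p2(i, j)| exceeds eps, 0 otherwise. Intensities
    are assumed normalized to [0, 1], matching the 0.8 threshold
    level quoted in the text."""
    return (np.abs(p1.astype(float) - p2.astype(float)) > eps).astype(np.uint8)
```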
[Figure: initial position, final position (a slightly displaced version of the first frame), and the resulting difference image.]
The system errors mentioned in the last item must be suppressed. If it is required to find the direction of motion, this can be done by constructing a cumulative difference image from a sequence of images. This, however, is not necessary in our system, as the direction of motion is invariably the same.
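Although not needed in this system, the cumulative difference image mentioned above is easy to sketch: each frame is compared against the first, and matches are accumulated with increasing weights, so positions occupied later by a moving object carry larger values and the direction of motion can be read off. The integer weighting used below is one common textbook variant, not the paper's own formulation.

```python
import numpy as np

def cumulative_difference(frames, eps=20):
    """Cumulative difference image: compare every frame against the
    first one and accumulate the thresholded differences with
    weights that grow over time, so later positions of a moving
    object receive larger values."""
    ref = frames[0].astype(int)
    acc = np.zeros(ref.shape, dtype=int)
    for k, frame in enumerate(frames[1:], start=1):
        acc += k * (np.abs(frame.astype(int) - ref) > eps)
    return acc
```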
Obtaining the difference image is simplified by the MATLAB Image Processing Toolbox. The input images are read using the 'imread' function and converted to binary images using the 'im2bw' function, which first converts the input image to grayscale and then thresholds it: the output binary image BW is 0 (black) for all pixels in the input image with luminance less than a user-defined level, and 1 (white) for all other pixels.
2.1. Segmentation details
This section describes the segmentation of the object. There are two basic forms of segmentation:
• Complete Segmentation.
• Partial Segmentation.
2.1.1. Complete and partial segmentation
Complete segmentation results in a set of disjoint regions corresponding uniquely with objects in the input image. In partial segmentation, the regions may not correspond directly with the image objects.
If partial segmentation is the goal, an image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, texture etc.
Segmentation methods can be divided into three groups according to the dominant features they employ: the first uses global knowledge about an image or its part; edge-based segmentation forms the second group; and region-based segmentation the third. In the second and third groups, each region can be represented by its closed boundary, and each closed boundary describes a region. Edge-based segmentation methods find the borders between regions, while region-based methods construct the regions directly.
Region-growing techniques are generally better in noisy images, where borders are not easy to detect. Homogeneity is an important property of regions and is used as the main segmentation criterion in region growing, where the basic idea is to divide an image into zones of maximum homogeneity.
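The region-growing idea just described can be sketched as a flood fill from a seed pixel, absorbing 4-connected neighbours whose gray level stays close to the seed's. This is a generic textbook sketch, not the paper's implementation; the seed choice and tolerance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def grow_region(img, seed, tol=10):
    """Minimal region-growing sketch: starting from a seed pixel,
    absorb 4-connected neighbours whose gray level differs from
    the seed's by at most 'tol', producing one homogeneous region
    as a boolean mask."""
    h, w = img.shape
    seed_val = int(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```

Growing one region per seed and repeating over unassigned pixels yields the disjoint, homogeneous regions defined below.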
A complete segmentation of an image R is a finite set of regions R1, …, Rs such that

R = R1 ∪ R2 ∪ … ∪ Rs,   Ri ∩ Rj = ∅ for i ≠ j.
Further, for region-based segmentation, the following conditions must be satisfied:

H(Ri) = TRUE for i = 1, 2, …, s
H(Ri ∪ Rj) = FALSE for i ≠ j, Ri adjacent to Rj

where s is the total number of regions in the image and H(Ri) is a binary homogeneity evaluation of region Ri. The resulting regions of the segmented image must be both homogeneous and maximal, where 'maximal' means that the homogeneity criterion would no longer hold after merging a region with any adjacent region.

Guest

Please send me the ppt on abnormal incident detection.

Guest

How do I make DIGITAL IMAGE PROCESSING: APPLICATION FOR ABNORMAL INCIDENT DETECTION in my college?