28-10-2016, 12:03 PM
Abstract: Video surveillance systems are becoming increasingly important for crime investigation, and the number of cameras installed in public spaces is growing. However, observing a wide and complex area requires many cameras installed at fixed positions. Detection of suspicious human behaviour is of great practical importance, yet due to the random nature of human movement, reliable classification of suspicious movements can be very difficult. Our primary aim is to define an approach for automatically tracking people and detecting unusual or suspicious movements in Closed Circuit TV (CCTV) videos. We propose a system for surveillance installations in indoor environments such as building entrances/exits and corridors. Our work presents a framework that processes video data obtained from a CCTV camera fixed at a particular location.
Controlling home appliances remotely with mobile applications has become quite popular due to the exponential rise in the use of mobile devices. Many applications exploit the GSM/GPRS facility of the handset, and several automated systems have been developed that inform the owner at a remote location about any intrusion, or attempted intrusion, into the house.
We develop an Android application that interprets the message a mobile device receives about a possible intrusion; a reply Short Message Service (SMS) message then triggers an alarm/buzzer in the remote house, making others aware of the possible intrusion. Changed pixels are identified using a threshold value, so the movement of an object is detected accurately. After motion is detected, the system sends a GCM alert to the Android mobile application.
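The threshold-based motion detection described above can be sketched as simple frame differencing: compare consecutive frames pixel by pixel and report motion when enough pixels exceed the intensity threshold. The function name and both threshold values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, pixel_threshold=25, min_changed=50):
    """Flag motion when enough pixels differ between consecutive grayscale frames.

    pixel_threshold: absolute intensity difference marking a pixel as changed.
    min_changed: number of changed pixels needed to report motion.
    Both values are hypothetical; real systems tune them per camera.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > pixel_threshold
    return changed.sum() >= min_changed, changed

# Example: a static background and a frame containing a bright moving region.
background = np.zeros((120, 160), dtype=np.uint8)
frame = background.copy()
frame[40:60, 50:80] = 200           # simulated intruder region
motion, mask = detect_motion(background, frame)
```

In a deployed system, a positive `motion` result would be the trigger for sending the GCM alert to the Android application.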
INTRODUCTION
Visual object detection and tracking are important components of video analytics (VA) in multi-camera surveillance. This paper proposes a framework for achieving these tasks in a multi-camera network. The proposed system configuration differs from the existing multi-camera surveillance systems in [13, 31, 36], which utilize common image information extracted from similar fields of view (FOVs) to improve object detection and tracking performance. In practice, however, such a camera setup may not be easily achieved because of economic concerns, topology limitations, etc. Therefore, we focus on the non-overlapping multi-camera scenario in this paper, and our main objective is to develop reliable and robust object detection and tracking algorithms for such an environment.
Automatic object detection is usually the first task in a multi-camera surveillance system, and background modeling (BM) is commonly used to extract predefined information such as an object's shape and geometry for further processing. Pixel-based adaptive Gaussian mixture modeling (AGMM) is one of the most popular algorithms for BM, where object detection is formulated as an independent per-pixel detection problem. It is invariant to gradual lighting changes, slightly moving backgrounds, and fluttering objects. However, it usually yields unsatisfactory foreground information (the object mask) for object tracking due to sensor noise and an inappropriate GM update rate, which leads to holes, unclosed shapes, and inaccurate boundaries in the extracted object. While sensor noise can be suppressed through appropriate filtering, it is difficult to find an optimal update rate for the model because different objects behave differently in the scene. Furthermore, important object information such as edges and shape is not utilized in such methods. Therefore, the performance of subsequent operations such as object tracking and recognition will be degraded. In this paper, a mean shift (MS)-based segmentation algorithm is proposed for improving the object mask obtained by AGMM. By using the segmentation information, holes within the mask can be significantly reduced through inpainting, and better alignment between the object boundary and that of the mask can be obtained. Occlusion of moving objects is a major problem in multi-camera surveillance systems. Existing multi-camera surveillance systems address occlusion by fusing the BM information obtained from overlapping image regions in adjacent cameras. These approaches, however, are not directly applicable to our non-overlapping setup. We therefore propose to use stereo cameras, which offer additional depth information, to resolve the occlusion problem.
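The per-pixel background-modeling idea behind AGMM can be illustrated with a deliberately simplified sketch: one Gaussian per pixel with a fixed learning rate, rather than the full adaptive mixture the paper uses. The class name, learning rate, and threshold factor are assumptions for illustration only.

```python
import numpy as np

class RunningGaussianBM:
    """Simplified per-pixel background model: one Gaussian per pixel,
    updated with a fixed learning rate. AGMM generalises this to a
    mixture of Gaussians with adaptive weights per pixel."""

    def __init__(self, first_frame, learning_rate=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 36.0)  # initial variance guess
        self.alpha = learning_rate
        self.k = k  # pixel is foreground if |x - mean| > k * std

    def apply(self, frame):
        frame = frame.astype(np.float64)
        foreground = np.abs(frame - self.mean) > self.k * np.sqrt(self.var)
        # Update only background pixels so foreground objects are not
        # absorbed into the model (the "update rate" problem in the text).
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * ((frame - self.mean) ** 2 - self.var)[bg]
        return foreground

# A flat scene, then a frame with a bright new object.
bm = RunningGaussianBM(np.full((60, 80), 100, dtype=np.uint8))
frame = np.full((60, 80), 100, dtype=np.uint8)
frame[20:30, 30:40] = 220            # new object enters the scene
fg_mask = bm.apply(frame)
```

The mask produced this way exhibits exactly the weaknesses described above (holes, ragged boundaries under noise), which is what motivates the MS-based segmentation refinement.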
LITERATURE SURVEY
The system consists of multiple intelligent stereo video surveillance cameras connected to a local PC-based video processing server. To support high-level VA tasks and other processing algorithms, the server is equipped with an Intel Core i7 920 CPU with 4 GB RAM, a GTX 295 GPU, and storage. Interested readers are referred to [12] for more details. Fig. 1 shows the block diagram of the proposed algorithms, which consists of three major blocks.
Detection Block: It takes in a new video frame captured by a stereo camera and performs object extraction. BM is first applied for foreground/background labeling of each captured pixel using the AGMM algorithm. The proposed object segmentation is then used to obtain the mask of newly identified objects. Finally, the weighted color histogram [3] of the object is extracted using the object mask.
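A weighted color histogram of the kind cited in [3] typically down-weights pixels far from the object centre (e.g. with an Epanechnikov kernel), since boundary pixels are less reliable. The sketch below shows this idea for a grayscale patch; the function name, bin count, and kernel choice are assumptions, and [3] may differ in detail.

```python
import numpy as np

def weighted_color_histogram(patch, mask, bins=16):
    """Kernel-weighted grayscale histogram of a masked object patch.

    Pixels near the patch centre get higher weight (Epanechnikov profile);
    pixels outside the object mask contribute nothing.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised squared distance from the patch centre.
    r2 = ((ys - (h - 1) / 2) / (h / 2)) ** 2 + ((xs - (w - 1) / 2) / (w / 2)) ** 2
    weights = np.maximum(1.0 - r2, 0.0) * mask       # Epanechnikov * object mask
    idx = (patch.astype(np.int32) * bins) // 256     # bin index per pixel
    hist = np.bincount(idx.ravel(), weights=weights.ravel(), minlength=bins)
    return hist / max(hist.sum(), 1e-12)             # normalise to unit mass

# Uniform mid-gray patch, fully inside the mask: all mass lands in one bin.
patch = np.full((32, 32), 128, dtype=np.uint8)
mask = np.ones((32, 32))
hist = weighted_color_histogram(patch, mask)
```

The resulting normalised histogram is the appearance model handed to the tracking block.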
Tracking Block: It uses the proposed BKF-SGM-IMS algorithm for tracking, which employs the GM and MS trackers to model the state and noise densities and to estimate the locations of the tracked objects. This results in a bank of parallel MS trackers, which significantly reduces the possibility of converging to background regions with color information similar to the tracked objects. The weighted color histogram of a newly detected object is used in the improved MS tracker. Moreover, the object mask obtained from the detection block is used to update the bounding box and color histogram of the tracked objects.
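The core mean-shift step inside each tracker can be sketched independently of the Kalman machinery: a window repeatedly moves to the weighted centroid of a per-pixel similarity map until it stops shifting. The function name, window size, and the synthetic weight map below are illustrative assumptions; the paper's improved MS tracker adds color-histogram weighting and the BKF-SGM state model on top of this basic iteration.

```python
import numpy as np

def mean_shift(weight_map, start, half_win=8, max_iter=20, eps=0.5):
    """Move a window to the weighted centroid of `weight_map` until convergence.

    weight_map: per-pixel similarity to the target appearance model
                (synthetic here; derived from the color histogram in practice).
    """
    cy, cx = float(start[0]), float(start[1])
    h, w = weight_map.shape
    for _ in range(max_iter):
        y0, y1 = int(max(cy - half_win, 0)), int(min(cy + half_win + 1, h))
        x0, x1 = int(max(cx - half_win, 0)), int(min(cx + half_win + 1, w))
        win = weight_map[y0:y1, x0:x1]
        total = win.sum()
        if total <= 0:
            break                                  # no support: stay put
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny, nx = (ys * win).sum() / total, (xs * win).sum() / total
        done = abs(ny - cy) < eps and abs(nx - cx) < eps
        cy, cx = ny, nx
        if done:
            break
    return int(round(cy)), int(round(cx))

# Synthetic similarity map: the object sits centred at (40, 55);
# the tracker starts slightly off and converges onto it.
wmap = np.zeros((80, 100))
wmap[36:45, 51:60] = 1.0
pos = mean_shift(wmap, start=(32, 48))
```

Running a bank of such trackers in parallel, as the text describes, means starting several of them from perturbed positions and fusing their estimates, which lowers the chance that any single one locks onto similarly colored background.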
Recognition Block: In this block, the color appearance and shape information of the object are modeled by a superpixel representation using the object mask obtained in the previous steps. Finally, the SP-EMD distance metric is used to perform matching between the query image and a reference obtained either from a database or from adjacent cameras for network-wide tracking. For generality, prior training of the object appearance is assumed to be unavailable, though training-based algorithms can also be used.
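The Earth Mover's Distance underlying SP-EMD can be illustrated in its simplest one-dimensional form, where it reduces to the L1 distance between cumulative distributions. This is only the EMD principle, not the full superpixel formulation from the paper, and the function name is an assumption.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalised 1-D histograms.

    For 1-D histograms of equal total mass, EMD equals the L1 distance
    between their cumulative distribution functions.
    """
    return float(np.abs(np.cumsum(h1) - np.cumsum(h2)).sum())

# Histograms whose mass sits in different bins: bin-wise overlap is zero
# in all three cases, but EMD still reflects how far the mass moved.
a = np.array([0.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 0.0])   # one bin away from a
c = np.array([0.0, 0.0, 0.0, 1.0])   # two bins away from a
```

This "cross-bin" behaviour is precisely why EMD-style metrics tolerate small appearance shifts between cameras better than plain bin-wise histogram comparisons.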
RESULTS
The performance of the proposed method is evaluated and compared with other methods in terms of correct object recognition rate and processing time. The comparison of average recognition rates shows that our method achieves the highest recognition rate in both tests. The high recognition rate is due mainly to the local representation of the object's appearance. In contrast, histogram-based methods cannot fully utilize shape and edge information, since they model the appearance as a global representation.
Compared to other methods, our algorithm is very efficient. On average, it takes only 0.08 seconds to process one object. It is therefore practical for real-time video surveillance systems after appropriate acceleration using multi-threaded processing or a GPU. The tracked objects and object masks in all views used for recognition can be found in our application.
CONCLUSION
New approaches for object detection and tracking in a camera network have been presented. A novel object detection algorithm using color-based MS segmentation and depth information is first proposed for improving background modeling and the segmentation of occluded objects. The segmented objects are then tracked by BKF-SGM-IMS. Finally, a non-training-based object recognition algorithm based on the SP-EMD distortion metric is presented for identifying similar objects extracted in nearby cameras to achieve network-based tracking. The usefulness of the proposed algorithms is illustrated by experimental results and comparison with conventional methods.
FUTURE WORK
Though this project has many advantages, in the future we would like to upgrade it to the next level: not only viewing the captured image, but also viewing the entire clip of what happened and what was captured. All this will be done at the moment it occurs, within seconds of the action happening at the site.