11-08-2012, 04:26 PM
AUTOMATIC HUMAN TRACKING SYSTEM
AUTOMATIC HUMAN.doc (Size: 875 KB / Downloads: 99)
INTRODUCTION
Video surveillance systems are in widespread use for the remote monitoring of people. Principally, a video surveillance system serves as a security system because of its ability to track a particular person. If its function is extended to track numerous people, the demands on the system grow in various ways. Two common examples of such uses are searching for lost children and gathering and analyzing consumers' route patterns for the marketing research of a retail establishment. Such video surveillance systems are referred to as "automatic human tracking systems". The main aim is to show how automatic human tracking systems can be improved by resolving some of the problems of conventional video surveillance systems.
Currently existing video surveillance systems have many limitations. In one case, systems have difficulty isolating a number of people located at different positions at the same time and tracking those people automatically. In another, the number of people who can be targeted is limited by the extent of the users' involvement in manually switching the view from one video camera to another. Although approaches do exist to increase the efficiency of identifying and tracking particular people in a system comprised of numerous surveillance positions, these approaches increase the user's workload, since they require the user to identify each target manually.
SYSTEM FEATURES
The features of the system are shown in Fig. 2.1.1. A graphical user interface (GUI) was developed to improve the maintainability of the system configuration. The functions of the GUI are to create and edit a graphical representation of a building layout, to deploy video cameras on that layout, to monitor mobile agents, and to create data for simulation. The GUI uses the neighbour-node algorithm to compute the adjacency of video cameras, so that this information is displayed graphically and maintainability is improved.

The algorithm, which is the subject of this paper, is used not only to calculate the neighbouring video cameras but also to determine a mobile agent's next destination accurately, and it contributes to the reliability of the system while using minimal computing resources. Bypass methods, which can improve the robustness of the automatic human tracking system, are currently being researched using the algorithm; because the mobile agents employ these methods, continuous tracking of the target people can be ensured, improving the reliability and persistence of the system. Lost-target re-detection and acquisition methods are also being researched using the algorithm. The goal of these re-detection methods is to ensure continuous tracking by re-acquiring target people whenever the mobile agents lose track of them, which further improves robustness, reliability, and persistence.

OSGi and mobile agent technologies are adopted to improve the system's scalability. OSGi is a Java-based service framework that hosts server software, including web server functions.
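The neighbour-node idea above can be sketched as a small graph of cameras, where an agent's next destination is chosen from the current camera's neighbours. This is a minimal illustrative sketch, not the paper's actual implementation: the class and method names, the graph data, and the "highest match score wins" rule are all assumptions.

```java
import java.util.*;

// Sketch: cameras are nodes in an undirected graph; edges connect cameras
// whose views are adjacent on the building layout. A mobile agent migrates
// to the neighbouring camera whose feature extraction server reported the
// best match for the target.
public class CameraGraph {
    private final Map<String, Set<String>> adjacency = new HashMap<>();

    // Register a corridor/doorway connecting two cameras on the layout.
    public void connect(String a, String b) {
        adjacency.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        adjacency.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Neighbouring cameras of the camera currently hosting the agent.
    public Set<String> neighbours(String camera) {
        return adjacency.getOrDefault(camera, Collections.emptySet());
    }

    // Pick the next destination: the neighbour with the highest match score.
    public Optional<String> nextDestination(String current, Map<String, Double> matchScores) {
        return neighbours(current).stream()
                .filter(matchScores::containsKey)
                .max(Comparator.comparingDouble(matchScores::get));
    }
}
```

Keeping only local adjacency (rather than a full camera-to-camera table) is what lets the destination lookup stay cheap, consistent with the paper's goal of minimal computing resources.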
SYSTEM CONFIGURATION
The system configuration of the automatic human tracking system is shown in Fig. 2.2.1. It is assumed that the system is installed in a given building. Before a person is granted access to the building, the person's information is registered in the system: a camera captures images of the person's face and body, which are registered into the system. Any person who is not registered, or who is not recognized by the system, is not allowed to roam inside the building.
The system is composed of an agent monitoring terminal, an agent management server, a video recording server, and feature extraction servers, each paired with a video camera. The agent monitoring terminal is used to register the target person's information, to retrieve and display the information of the initiated mobile agents, and to display video of the target. The agent management server records the mobile agents' tracking history and provides this information to the agent monitoring terminal. The video recording server records all video images and provides them to the agent monitoring terminal on request. Each feature extraction server, together with its video camera, analyzes the captured image and extracts the feature information from it.
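The registration and recognition steps described above can be sketched as a simple registry: a person's feature vector (e.g. derived from the face and body images) is stored before entry, and an extracted vector is later matched against the registry. The Euclidean-distance matching, the threshold, and all names here are illustrative assumptions, not the system's documented method.

```java
import java.util.*;

// Sketch: register feature vectors at the entrance; identify a person by
// finding the nearest registered vector within a distance threshold.
// Anyone outside the threshold is treated as unrecognized.
public class PersonRegistry {
    private final Map<String, double[]> registered = new HashMap<>();

    public void register(String personId, double[] features) {
        registered.put(personId, features.clone());
    }

    // Return the registered person closest to the extracted features,
    // or empty if nobody is within the threshold (person not recognized).
    public Optional<String> identify(double[] extracted, double threshold) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : registered.entrySet()) {
            double d = 0;
            for (int i = 0; i < extracted.length; i++) {
                double diff = extracted[i] - e.getValue()[i];
                d += diff * diff;
            }
            d = Math.sqrt(d);
            if (d < bestDist) {
                bestDist = d;
                best = e.getKey();
            }
        }
        return bestDist <= threshold ? Optional.ofNullable(best) : Optional.empty();
    }
}
```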
SIMULATOR AND GRAPHICAL USER INTERFACE
A simulator is currently being developed in the Java language. The simulator consists of three functions: an image-processing simulator, an editor for creating target simulation routes, and a simulation feature-data creator.
An agent's movement-path information is displayed in the left area of the GUI. A map of the running agents and each agent's status are displayed in the center area. On the right side, the user sets the configuration: the IP address of each feature extraction server, its accuracy level, and so on. The GUI can simulate various layouts of the feature extraction servers, set their accuracy levels, and confirm the movement of the agents. Tests of the data generator confirmed the accuracy of the system with data from 20 cameras; the simulator still needs improvement, especially when more than 20 cameras are used. Further necessary functions will be added to the simulator, and it will also be used to verify the performance of the system. As such, the simulator is used in the examination.
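The route-editor output described above can be pictured as a sequence of timed camera detections that the image-processing simulator replays in place of live video. The record layout, the fixed walking time per hop, and all names in this sketch are assumptions for illustration only.

```java
import java.util.*;

// Sketch: a simulated target's route is a list of (camera, time) detections,
// generated from an ordered list of camera IDs with a fixed time per hop.
public class RouteSimulator {
    public record Detection(String cameraId, int timeSec) {}

    // Build a simulated route, assuming the target takes secondsPerHop
    // seconds to move between consecutive cameras.
    public static List<Detection> createRoute(List<String> cameras, int secondsPerHop) {
        List<Detection> route = new ArrayList<>();
        for (int i = 0; i < cameras.size(); i++) {
            route.add(new Detection(cameras.get(i), i * secondsPerHop));
        }
        return route;
    }
}
```

Replaying such routes lets the system's agent migration be tested against many camera layouts without deploying physical cameras, which matches the simulator's stated role in the examination.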