22-05-2014, 02:42 PM
Image Processing and Pattern Matching
Abstract
The growth of Electronic Media, Process Automation and, especially, the dramatic rise in attention to national and personal security in recent years have all contributed to a growing need to automatically detect features and events in pictures and video streams on a massive scale, in real time and without human intervention. To date, all technologies available for such automated processing have fallen short of supplying a solution that is both technically viable and cost-effective.
This white paper details the basic ideas behind a novel, patent-pending technology called Image Processing over IP networks (IPoIP™). As its name implies, IPoIP provides a solution for automatically extracting useful data from a large number of simultaneous image inputs (video or still) connected to an IP network, but unlike existing methods it does so at reduced cost without compromising reliability. The document also outlines the existing image-processing architectures and compares them to IPoIP, and concludes with a short chapter detailing several possible implementations of IPoIP in existing applications.
Introduction
A tremendous amount of research effort has been invested in recent years in extracting meaningful data from captured images (both video and still). As a result, a large number of proven algorithms exist for both real-time and offline applications, implemented on platforms ranging from pure software to pure hardware. These platforms, however, are generally designed to handle a relatively small number of simultaneous image inputs (in most cases no more than one), and they follow one of two main architectures: Local Processing and Server Processing.
IPoIP Architecture
The IPoIP architecture was designed to answer the needs defined above with the following key goals in mind:
• Providing a cost-effective solution for image-processing applications over a large number of cameras without sacrificing detection probability or increasing the False Alarm Rate (FAR).
• Enabling the application of any algorithm to any camera even if it is in a geographically remote location with limited supporting facilities.
• Providing the ability to apply a wide range of algorithms simultaneously to any camera without limiting the user to only a single application at a time.
Feature Analysis at the Central Server
The main part of the processing is performed by the IPoIP server. The server is able to dynamically request specific features from each camera, according to the requirements of the specific algorithms that are currently being applied.
The server analyzes the feature data that is collected from each camera, and dynamically allocates computational resources as needed. In this way the server is able to utilize large-scale system statistics to perform very complex tasks when needed, without requiring a huge and expensive network for support.
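The dynamic feature-request behavior described above can be sketched in a few lines. The message and class names below (`FeatureRequest`, `IPoIPServer`, the feature labels) are illustrative assumptions, not part of the IPoIP specification; the sketch only shows the idea that the server maintains per-camera subscriptions and merges the needs of all currently running algorithms so each feature is extracted once and shared:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRequest:
    """Hypothetical message the server sends to a remote camera front end."""
    camera_id: str
    features: set[str]  # e.g. {"edges", "motion_blobs"}

@dataclass
class IPoIPServer:
    """Toy model of per-camera feature subscriptions on the central server."""
    subscriptions: dict[str, set[str]] = field(default_factory=dict)

    def apply_algorithm(self, camera_id: str, required: set[str]) -> FeatureRequest:
        # Union the new algorithm's needs with what is already requested, so
        # each feature is computed once at the camera and shared by all
        # algorithms currently applied to it.
        current = self.subscriptions.setdefault(camera_id, set())
        current |= required
        return FeatureRequest(camera_id, set(current))

server = IPoIPServer()
server.apply_algorithm("cam-01", {"motion_blobs"})
req = server.apply_algorithm("cam-01", {"edges"})
print(sorted(req.features))  # ['edges', 'motion_blobs']
```

The key design point is that adding a second algorithm to a camera only widens the existing subscription rather than opening a second video stream, which is what keeps the network load low.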
The part of each algorithm that runs on the server performs the following main tasks:
1. Request specific features from the remote UFE.
2. Analyze the incoming features over time and extract meaningful “objects” from the scene.
3. Track all moving objects in the scene in terms of size, position and speed, and calibrate all of this data into real-world coordinates. The calibration process transforms the two-dimensional data received from the sensors into three-dimensional data using various calibration techniques; many such techniques can be implemented in accordance with the specific scene being analyzed.
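The calibration in step 3 can be illustrated with one common technique for mapping image coordinates to world coordinates: a planar homography, which assumes tracked objects move on a known ground plane. The source does not specify which calibration technique IPoIP uses, so this is purely an illustrative sketch, and the matrix values are placeholders (in practice the homography would be estimated from known point correspondences in the scene):

```python
def apply_homography(H, x, y):
    """Map a pixel position (x, y) to ground-plane world coordinates (X, Y),
    assuming the object sits on a plane described by the 3x3 homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    X = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    Y = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return X, Y

# Placeholder calibration: the identity homography maps pixels straight to
# world units, so a tracked object at pixel (320, 240) stays at (320, 240).
H_identity = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]
print(apply_homography(H_identity, 320.0, 240.0))  # (320.0, 240.0)
```

With world coordinates for consecutive frames, object speed follows directly from the positional difference divided by the frame interval.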