
MOVEMENT OF A VIDEO CAMERA


INTRODUCTION

The movement of a video camera caused by hand shake or platform motion often introduces jerky image motion that is annoying to the human eye. The undesirable effect of hand shake is even more pronounced during zooming. The purpose of stabilization is to remove such unwanted motion from the image sequence while preserving dominant motions such as intentional camera panning.
If a high-resolution camera is held in the hand or mounted on a moving object (a car, a tank, etc.), the resulting video stream is of very poor quality. Placing the camera on a stable holder helps, but if the holder is tall, the camera is exposed to environmental influences such as wind, road roughness, or waves at sea. Image stabilization of the video stream is therefore needed, so that the significant object stays in the same position on the screen.
We distinguish between two types of camera movement:
• Weak shaking (approx. ±10° variation in the horizontal and vertical directions and/or frequencies of a fraction of a hertz)
• Strong shaking (more than ±10° variation in the horizontal and vertical directions and/or frequencies of tens of hertz).
In the first case, the shaking can be removed by purely software-based (digital) image stabilization. Strong shaking, on the other hand, cannot be stabilized by software alone; additional hardware is needed, based either on a servomotor unit or a pneumatic/hydraulic unit capable of compensating the movements of the camera system in the opposite direction.
In early years, mechanical image stabilization devices such as accelerometers, gyro-sensors, and dampers were used. Many solutions to the problem therefore involve hardware for damping the motion of the camera, such as a Steadicam rig or gyroscopic stabilizers. This equipment works well in practice, but it is very expensive.
Optical image stabilizers, developed more recently, employ an optical device such as a prism or a movable lens assembly that moves opposite to the camera movement. Like mechanical stabilizers, optical image stabilizers are generally expensive and lack the compactness that is crucial for today's consumer electronic devices. Digital stabilizers, by contrast, use video signal processing techniques to solve the problem electronically, without resorting to mechanical or optical devices. This type of approach is more economical for digital video cameras.

In our project we perform real-time video stabilization using digital image stabilization. A digital image stabilization system first estimates the unwanted motion and then applies corrections to the image sequence. Image motion can be estimated using spatio-temporal or region-matching approaches. Spatio-temporal approaches include parametric block matching, direct optical flow estimation, and a least-mean-square-error matrix inversion approach. Region-matching methods include bit-plane matching, point-to-line correspondence, feature tracking, pyramidal approaches, and block matching.


LITERATURE SURVEY
Image stabilization algorithms mostly consist of three main parts: motion estimation, motion smoothing, and motion compensation. Many different approaches exist; the main differences lie in the accuracy of the resulting global motion vector and in the algorithm used to estimate the local motion vectors.

Plane Matching Algorithm
As stated above, stabilization algorithms are based on estimating the motion, represented by a motion vector. Brooks et al., in their paper 'Real-Time Digital Image Stabilization', proposed a straightforward solution based on discrete correlation and cross-correlation. The discrete correlation produces a matrix of elements; elements with a high correlation value correspond to the locations where a chosen pattern and the image match well. The value of each element is thus a measure of similarity at the relevant point and can be used to find locations in an image that are similar to the pattern.
The input is formed by a pattern (in the form of an image) and an input image. It is not necessary to search the whole image, so a smaller area (search window) is defined. This area also specifies the maximal shift in the vertical and horizontal directions; evaluating the pattern at every position within the window yields a discrete 2D correlation function.
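The search-window correlation can be illustrated as below. The function names and window parameterization are our own, not taken from the cited paper; note also that plain correlation is biased toward bright regions, so a normalized correlation is usually preferred in practice.

```python
import numpy as np

def correlation_surface(image, pattern, top, left, max_shift):
    """Discrete 2D correlation of `pattern` against `image`, evaluated
    only inside a search window around position (top, left). The window
    size fixes the maximal vertical/horizontal shift that can be found."""
    ph, pw = pattern.shape
    size = 2 * max_shift + 1
    scores = np.empty((size, size))
    for i, dy in enumerate(range(-max_shift, max_shift + 1)):
        for j, dx in enumerate(range(-max_shift, max_shift + 1)):
            patch = image[top + dy:top + dy + ph, left + dx:left + dx + pw]
            scores[i, j] = float((patch * pattern).sum())
    return scores

def best_shift(scores, max_shift):
    """Location of the correlation peak, as a (dy, dx) displacement."""
    i, j = np.unravel_index(scores.argmax(), scores.shape)
    return int(i) - max_shift, int(j) - max_shift
```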


Bit Plane Matching Algorithm
S.H. Jeon et al., in their paper 'Fast Digital Image Stabilizer Based on Gray-Coded Bit-Plane Matching', discussed algorithms that calculate the correlation from images passed through an edge detector. This technique provides certain improvements, but edge detectors tend to produce many useless pixels and are sensitive to the image intensity. The detector also introduces an additional time-consuming operation into the processing phase, and since most edge detectors are nonlinear systems, the convolution cannot be carried out in the frequency domain.
Better results can be achieved using gray-coded bit planes. Gray coding encodes most of the useful information into a few bit planes, which allows the motion vector to be estimated from a single bit plane. The bit-plane matching algorithm compares planes using the binary exclusive-or (XOR) operator.
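A minimal sketch of gray-coded bit-plane matching follows; the chosen plane index, search range, and function names are illustrative assumptions rather than the exact parameters of the cited stabilizer.

```python
import numpy as np

def gray_code(img):
    """Convert 8-bit intensities to their Gray-code representation."""
    return img ^ (img >> 1)

def bit_plane(img, k):
    """Extract bit plane k as a binary image."""
    return (img >> k) & 1

def bitplane_motion(prev, curr, k=4, max_shift=2):
    """Local motion vector: the shift minimizing the number of
    mismatched bits (XOR count) between one gray-coded bit plane
    of the previous and current frames."""
    gp = bit_plane(gray_code(prev), k)
    gc = bit_plane(gray_code(curr), k)
    best, best_err = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = int(np.count_nonzero(np.roll(gc, (dy, dx), axis=(0, 1)) ^ gp))
            if best_err is None or err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Because the planes are binary, each candidate shift costs only XOR and popcount operations, which is what makes the method fast compared with full gray-level correlation.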
The local motion vector is obtained by minimizing this error measure. Several (typically four) local motion vectors, one from each area, are passed together with the previous global motion vector through a median operator to produce the current global motion vector estimate. The global motion estimate can then optionally be passed through a filter tuned to preserve intentional camera motion while removing the undesirable high-frequency motion. Finally, the filtered motion estimate is used to shift the current frame by an integer number of pixels in the direction opposite to the motion.
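The median combination and the optional smoothing filter might look like the sketch below. The naming is our own, and the first-order low-pass on the accumulated camera path is one common choice of filter, not necessarily the one used in the surveyed work.

```python
import numpy as np

def global_motion(local_vectors, prev_global):
    """Component-wise median of the local motion vectors and the
    previous global vector; robust against a few bad local matches."""
    vs = np.array(list(local_vectors) + [prev_global])
    dy, dx = np.median(vs, axis=0)
    return int(round(dy)), int(round(dx))

class MotionSmoother:
    """Optional filter: low-pass the accumulated camera path so that
    intentional panning is preserved and jitter is removed."""
    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.path = np.zeros(2)     # accumulated raw motion
        self.smooth = np.zeros(2)   # low-passed (intended) motion

    def correction(self, gmv):
        """Integer pixel shift to apply to the current frame,
        opposite to the estimated jitter."""
        self.path += gmv
        self.smooth = self.alpha * self.smooth + (1 - self.alpha) * self.path
        return tuple(int(v) for v in np.round(self.smooth - self.path))
```

The median makes the global estimate robust: an outlier local vector, for example from a region containing a moving object, is simply voted out.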

Edge Detection Method
To improve the estimation of the local motion vector, Martin Drahanský et al., in 'Digital Video Stabilization for General Security Surveillance Systems', experimentally determined that at least two bit planes are needed to increase the reliability of the estimation.
A further improvement lies in the use of more than one maximum. This keeps the solution stable in situations where the edges have very low contrast: the five highest peaks in the search area are retained, and all of them can be taken into account in the computation of the global motion vector.
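One way to exploit several peaks is sketched below. The consistency rule (preferring the candidate closest to the previous motion vector) is our own illustrative choice; the cited paper only specifies that the five highest peaks are considered.

```python
import numpy as np

def top_peaks(surface, n=5):
    """Positions of the n highest values in a correlation surface."""
    order = np.argsort(surface, axis=None)[::-1][:n]
    return [tuple(int(v) for v in np.unravel_index(i, surface.shape))
            for i in order]

def pick_consistent(peaks, prev_mv):
    """Among the candidate displacements, keep the one closest to the
    previous motion vector (a simple temporal-consistency rule)."""
    return min(peaks, key=lambda p: (p[0] - prev_mv[0]) ** 2
                                    + (p[1] - prev_mv[1]) ** 2)
```

With low-contrast edges the true displacement may not be the single highest peak, so keeping several candidates and resolving them against temporal context is more robust than trusting the global maximum alone.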