Blurred Image Recognition by Legendre Moment Invariants



INTRODUCTION

Image processing is a very active area with impact in many domains, from remote sensing, robotics, and traffic surveillance to medicine. Automatic target recognition and tracking, character recognition, and 3-D scene analysis and reconstruction are only a few of its objectives. Since real sensing systems are usually imperfect and environmental conditions change over time, the acquired images often provide a degraded version of the true scene.

An important class of degradations we face in practice is image blurring, which can be caused by diffraction, lens aberration, wrong focus, and atmospheric turbulence. In pattern recognition, two options have been widely explored: a two-step approach that first restores the image and then applies recognition methods, or a direct one-step solution that is free of blurring effects. In the former case, the point spread function (PSF), most often unknown in real applications, must first be estimated.
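Blurring is conventionally modeled as convolution of the ideal image f with the PSF h, g = f * h. The following sketch simulates this degradation with a Gaussian PSF (a common centrosymmetric example); the function names and edge-padding choice are illustrative assumptions, not part of the original text.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Centrosymmetric Gaussian point spread function, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    """Blurred image g = f * h via direct 2-D convolution ('same' output size,
    edge padding at the borders)."""
    H, W = image.shape
    k = psf.shape[0] // 2
    padded = np.pad(image, k, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + psf.shape[0], j:j + psf.shape[1]] * psf)
    return out
```

Because the PSF is normalized, blurring preserves the mean intensity of the image; this property underlies the moment-based invariants discussed below.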

In the latter case, finding a set of invariants that are not affected by blurring is the key problem and the subject of this project. The pioneering work in this field derived invariants to convolution with an arbitrary centrosymmetric PSF. These invariants have been successfully used in template matching of satellite images, in pattern recognition, in blurred digit and character recognition, in normalizing blurred images into canonical forms, and in quantitative focus/defocus measurement.

More recently, combined blur-rotation invariants were introduced and successfully applied to satellite image registration and camera motion estimation. A set of combined invariants, invariant to both affine transforms and blur, has also been reported, and the extension of blur invariants to higher dimensions has been investigated. All the existing methods for deriving blur invariants are based on geometric moments or complex moments. However, both geometric and complex moments contain redundant information and are sensitive to noise, especially when high-order moments are concerned.

This is due to the fact that the kernel polynomials are not orthogonal. Orthogonal moments, by contrast, allow the image to be recovered from its moments. It has been shown that orthogonal moments are better than other types of moments in terms of information redundancy, and are more robust to noise. Moment invariants are considered reliable features in pattern recognition only if they are insensitive to image noise. Consequently, the use of orthogonal moments in the construction of blur invariants can be expected to provide better recognition results.

To the authors’ knowledge, no orthogonal moments have previously been used to construct blur invariants. In this project, we propose a new method to derive a set of blur invariants based on orthogonal Legendre moments. The project is organized as follows: first, the theory of blur invariants of geometric moments and the definition of Legendre moments; next, the relationship between the Legendre moments of the blurred image and those of the original image and the PSF; then, based on this relationship, a set of blur invariants using Legendre moments; and finally, experimental results evaluating the performance of the proposed descriptors.
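The Legendre moment of order (m, n) of an image f mapped onto [-1, 1] x [-1, 1] is defined as lambda_mn = ((2m+1)(2n+1)/4) * integral of P_m(x) P_n(y) f(x, y) over the unit square, where P_k is the Legendre polynomial of degree k. A minimal discrete approximation (pixel-center sampling is an implementation assumption, not the paper's exact scheme):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(image, m, n):
    """Discrete Legendre moment lambda_mn of an H x W image mapped to [-1, 1]^2.

    lambda_mn = (2m+1)(2n+1)/4 * sum_y sum_x P_m(x) P_n(y) f(x, y) dx dy
    """
    H, W = image.shape
    x = -1 + (2 * np.arange(W) + 1) / W   # pixel centers along x in [-1, 1]
    y = -1 + (2 * np.arange(H) + 1) / H   # pixel centers along y in [-1, 1]
    cm = np.zeros(m + 1); cm[m] = 1       # coefficient vector selecting P_m
    cn = np.zeros(n + 1); cn[n] = 1       # coefficient vector selecting P_n
    Pm = legval(x, cm)
    Pn = legval(y, cn)
    dx, dy = 2.0 / W, 2.0 / H
    norm = (2 * m + 1) * (2 * n + 1) / 4.0
    return norm * dx * dy * (Pn @ image @ Pm)
```

For a constant image f = 1, lambda_00 evaluates to 1 and all higher-order moments vanish, reflecting the orthogonality of the Legendre basis.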


Image blurring:

Researchers traditionally treat the shape from shading problem without considering the blur introduced by the camera. However, when one captures images with a camera, degradation in the form of blur and noise is often present in the observed images. It is natural that variations in image intensity due to camera blur affect the estimates of the surface shape. Thus, the estimated shape differs from the true shape even when the true surface reflectance model is known.

This limits the applicability of these techniques in 3D computer vision problems. It should be mentioned here that all the existing approaches in the literature assume a pinhole model, which inherently assumes that there is no camera blur during observation. However, blur can occur for a variety of reasons, such as improper focus setting or camera jitter. This motivates us to restore the image as well, while recovering the structure.

The problem can then be stated as follows: given a set of blurred observations of a static scene taken with different light source positions, obtain the true depth map and the albedo of the surface, and restore the images for the different light source directions. Since the camera blur is not known, we additionally estimate the blur point spread function (PSF) which caused the degradation. In this paper we assume a point light source illumination with known source directions and an orthographic projection. The problem can therefore be classified as a joint blind restoration and surface recovery problem.
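Under the Lambertian assumption, each ideal observation is f(x, y) = rho(x, y) * max(n(x, y) . s, 0), and the camera produces g = h * f, the convolution of f with the unknown PSF h. The sketch below builds such a blurred observation for one light direction; the names and the edge-padded 'same' convolution are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def blurred_lambertian(albedo, normals, source, psf):
    """Blurred photometric-stereo observation g = h * (rho * max(n . s, 0)).

    albedo : H x W reflectance map rho
    normals: H x W x 3 unit surface normals n
    source : length-3 unit light direction s
    psf    : square blur kernel h, assumed normalized
    """
    shading = np.clip(np.einsum("ijk,k->ij", normals, source), 0.0, None)
    f = albedo * shading                    # ideal Lambertian image
    k = psf.shape[0] // 2
    padded = np.pad(f, k, mode="edge")      # 'same'-size convolution
    H, W = f.shape
    g = np.zeros_like(f)
    for i in range(H):
        for j in range(W):
            g[i, j] = np.sum(padded[i:i + psf.shape[0], j:j + psf.shape[1]] * psf)
    return g
```

The joint estimation problem is to recover rho, the normals, and h from several such observations g taken under different source directions s.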

Since such a problem is inherently ill-posed, we need suitable regularization of all the fields to be estimated, i.e., the surface gradients as well as the albedo. Researchers in computer vision have attempted to use the shading information to recover the 3D shape. Horn was one of the first researchers to study this problem, by casting it as a solution to second order partial differential equations. The shape from shading (SFS) problem is typically solved using four different approaches.

These approaches include the regularization approach, the propagation approach, the local approach, and the linear approach. Most traditional SFS algorithms assume that the surface has a constant albedo, but photometric stereo (PS) does not. Some of the recent approaches to PS include a neural-network-based method for a rotational object with a non-uniform reflectance factor, and integrating SFS with PS in order to improve the performance of shape recovery.

The general approaches to image restoration include both stochastic and deterministic methods. For a comprehensive survey of various digital image restoration techniques the reader is referred to the literature. A plethora of methods have also been proposed to solve the problem of blind image deconvolution; recent work used local spectral inversion of a linearized total variation model for denoising and deblurring.

As discussed above, researchers have treated the shape estimation and restoration problems separately. Moreover, for shape estimation using the shading cue, the blur introduced by the camera is never considered. We demonstrate in this paper that both the shape estimation and restoration problems can be handled jointly in a unified framework.


The blur caused by sensor motion is a serious problem in a large number of applications from remote sensing to landmine detection to amateur photography. In general, this problem occurs if the time needed to capture an image is so long that the imaging system moves relative to the scene.

An example application in landmine detection is the general survey of minefields in the aftermath of military conflicts using visible light or infrared cameras. Cameras attached to airplanes and helicopters produce images blurred by the forward motion of the aircraft and by vibrations. While the vibrations can be damped to some extent using gyroscope stabilizers, there is no simple way to do the same with the forward movement.

A similar problem arises in the case of cameras attached to moving vehicles. For example, thermal infrared cameras attached to armoured vehicles can be used to detect anti-personnel and anti-tank mines on roads and tracks. Similarly, when taking photographs under low light conditions, the camera needs a long exposure time to gather enough light to form the image, which leads to objectionable blur.
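The uniform translation described in these examples is commonly modeled as convolution with a line-segment PSF along the motion direction. A minimal sketch (the function name, rounding to the pixel grid, and the angle parameterization are illustrative assumptions):

```python
import numpy as np

def motion_blur_psf(length, angle_deg=0.0, size=None):
    """Linear motion blur kernel: a normalized line segment of `length`
    samples at angle `angle_deg`, embedded in a size x size array.

    Models uniform camera translation during exposure: each output pixel
    is the average of `length` input pixels along the motion direction.
    """
    size = size or length
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, length):
        i = int(round(c + t * np.sin(theta)))  # row offset along motion
        j = int(round(c + t * np.cos(theta)))  # column offset along motion
        psf[i, j] += 1.0
    return psf / psf.sum()
```

Convolving a sharp frame with this kernel reproduces the smear seen in images taken from a forward-moving platform; deblurring amounts to inverting this convolution.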

To mitigate this problem, producers of digital cameras introduced two types of hardware solutions. The technically simpler one is to increase the sensitivity of a camera (ISO) by amplifying the signal from the sensor, which permits faster shutter speed. Unfortunately, especially in the case of compacts, this results in a decrease of image quality because of more noise. Optical image stabilization (OIS) systems, containing either a moving image sensor or an optical element to counteract camera motion, are technologically more demanding. They help to remove blur without increasing noise level but at the expense of higher cost, weight and energy consumption.

A system removing the blur in software would be an elegant solution to the problem. In this chapter we give an overview of possible approaches to this problem. The algorithms are explained in the context of photography, but the results can be applied to other cases, such as aerial reconnaissance and infrared imaging, as well. We start with an outline of the approaches and then describe a mathematical model of blurring. For each approach we summarize its strong and weak points and present a typical state-of-the-art method. Section 8 summarizes the results and indicates the potential of the individual approaches.



Blurring due to object or camera motion during image capture can cause substantial degradation in image quality. As a result, a great deal of research has been conducted on developing methods for restoring motion-blurred images. These methods make certain assumptions about the blurring process, the ideal image, and the noise. Various image processing techniques are then used to identify the blur and restore the image.

However, due to the lack of sufficient knowledge of the blurring process and the ideal image, the developed image blur restoration methods have limited applicability and their computational burden can be quite substantial. Recent advances in CMOS image sensor technology enable digital high speed capture up to thousands of frames per second.

This benefits traditional high speed imaging applications and enables new imaging enhancement capabilities such as multiple capture for increasing the sensor dynamic range. In this scheme, multiple images are captured at different times within the normal exposure time. Shorter exposure time images capture brighter areas of the scene, while longer exposure time images capture darker areas of the scene.

The images are then combined into a single high dynamic range image. In this paper we propose to use this multiple capture capability to simultaneously form a high dynamic range image and reduce or eliminate motion blur. Our algorithm operates completely locally: each pixel’s final value is computed using only that pixel’s captured values. Moreover, our method can operate recursively, requiring the storage of only a constant number of values per pixel, independent of the number of images captured.
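The per-pixel, constant-memory idea can be sketched as follows: accumulate successive equal-exposure captures for each pixel while they remain consistent with the running mean (no motion), and freeze a pixel once a new capture deviates too much. The threshold test and the `tol` parameter are simplifying assumptions for illustration, not the paper's actual estimator.

```python
import numpy as np

def recursive_multicapture(frames, tol=0.05):
    """Combine per-pixel short-exposure captures into one image.

    Per pixel, only a running sum, a count, and a frozen flag are stored,
    so memory is constant regardless of the number of frames.
    """
    total = frames[0].astype(float).copy()     # running per-pixel sum
    count = np.ones_like(total)                # captures accumulated so far
    frozen = np.zeros_like(total, dtype=bool)  # pixels that detected motion
    for f in frames[1:]:
        f = f.astype(float)
        mean = total / count
        consistent = np.abs(f - mean) <= tol   # crude motion test (assumption)
        update = consistent & ~frozen
        total[update] += f[update]
        count[update] += 1
        frozen |= ~consistent
    return total / count
```

Static pixels average over all captures (reducing noise and extending dynamic range), while moving pixels keep only their pre-motion average, suppressing blur.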