22-06-2012, 05:57 PM
Face recognition using neural networks.
FACE-RECOGNITION-USING-NEURAL-NETWORKS-SONA-COLLEGE-.pdf (Size: 468.07 KB / Downloads: 34)
INTRODUCTION
The paper presents a hybrid neural network solution, which compares favorably with
other methods and recognizes a person within a large database of faces. These neural
systems typically return a list of the most likely people in the database. Often only one image
is available per person.
First a database is created, which contains images of various persons. In the next
stage, the system is trained on the available images, and the learned representations are
stored in the database. Finally, it classifies an authorized person's face, which can be used
in a security monitoring system. Faces represent complex, multidimensional, meaningful
visual stimuli, and developing a computational model for face recognition is difficult.
1. DETECTION
When the system is attached to a video surveillance system, the recognition software
searches the field of view of a video camera for faces. Once a face is in view, it is detected
within a fraction of a second. A multi-scale algorithm is used to search for faces at low
resolution; the system switches to a high-resolution search only after a head-like shape is
detected.
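The coarse-to-fine idea behind this multi-scale search can be sketched as follows. This is a hypothetical illustration, not the paper's actual detector: the image is average-pooled to a low resolution, every coarse cell is tested with a cheap head-likeness predicate, and only the cells that fire are mapped back to full-resolution windows for the expensive search.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a grayscale image by an integer factor (coarse resolution)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def coarse_to_fine_search(img, is_head_like, factor=4):
    """Scan the low-resolution image first; return full-resolution candidate
    windows (y, x, h, w) only where the coarse pass fires.
    `is_head_like` is a stand-in for a real head-shape test."""
    coarse = downsample(img, factor)
    candidates = []
    for y in range(coarse.shape[0]):
        for x in range(coarse.shape[1]):
            if is_head_like(coarse[y, x]):
                # map the coarse cell back to a full-resolution region
                candidates.append((y * factor, x * factor, factor, factor))
    return candidates
```

In a real system the full-resolution search would then run a face classifier only inside the returned candidate windows, which is what makes the multi-scale scheme fast.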
2. ALIGNMENT
Once a face is detected, the head's position, size, and pose are determined first. A
face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. NORMALIZATION
The image of the head is scaled and rotated so that it can be registered and mapped
into an appropriate size and pose. Normalization is performed regardless of the head's
location and distance from the camera. Lighting does not affect the normalization process.
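One common way to derive the scale and rotation for this step is from landmark positions such as the eyes. The helper below is a hypothetical sketch (the paper does not specify its normalization math): it computes the angle that makes the eye axis horizontal and the scale that brings the inter-eye distance to a canonical value.

```python
import numpy as np

def normalization_params(left_eye, right_eye, canonical_dist=40.0):
    """Given (x, y) eye coordinates, return the rotation angle (degrees)
    and scale factor that align the face to a canonical size and pose.
    `canonical_dist` is an assumed target inter-eye distance in pixels."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))  # tilt of the eye axis
    dist = np.hypot(dx, dy)                 # current inter-eye distance
    scale = canonical_dist / dist
    return angle, scale
```

The returned angle and scale would feed an affine warp of the head image; because they depend only on relative landmark geometry, the result is independent of where the head sits in the frame, matching the text's claim.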
4. REPRESENTATION
The system translates the facial data into a unique code. This coding process makes
it easier to compare newly acquired facial data with stored facial data.
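A minimal sketch of such a coding step, assuming a grayscale face image with intensities in [0, 1] (the paper's actual code is derived from the SOM/network features described later, not this toy scheme): average the intensity over a fixed grid of cells and quantize each average to a few levels, producing a short fixed-length code.

```python
import numpy as np

def face_code(img, grid=4, levels=8):
    """Reduce a face image to a fixed-length integer code: one quantized
    mean intensity per grid cell, so two faces can be compared cell by cell.
    Assumes intensities in [0, 1]; `grid` and `levels` are illustrative."""
    h, w = img.shape
    ch, cw = h // grid, w // grid
    code = []
    for gy in range(grid):
        for gx in range(grid):
            cell = img[gy * ch:(gy + 1) * ch, gx * cw:(gx + 1) * cw]
            code.append(int(cell.mean() * (levels - 1)))
    return code
```

Because every face maps to the same code length, comparison in the matching stage reduces to a distance between two equal-length vectors.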
5. MATCHING
The newly acquired facial data is compared to the stored data and (ideally) linked to
at least one stored facial representation. Briefly, the system explores:
- local image sampling and a technique for partial lighting invariance,
- a self-organizing map (SOM) for projecting the image sample representation into a
quantized lower-dimensional space,
- the Karhunen-Loève (KL) transform as a point of comparison for the self-organizing map,
- a convolutional network (CN) for partial translation and deformation invariance, and
- a multi-layer perceptron (MLP) as a point of comparison for the convolutional network.
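The top-level matching step, returning a ranked list of the most likely people as described in the introduction, can be sketched as a nearest-neighbor search over stored feature vectors. This is a simplification: in the actual system the vectors would come from the SOM/convolutional-network pipeline above, and the function and database layout here are hypothetical.

```python
import numpy as np

def match(probe, database, top_k=3):
    """Rank stored identities by Euclidean distance between the probe's
    feature vector and each stored vector; return the top_k closest names.
    `database` maps identity name -> stored feature vector."""
    scores = []
    for name, stored in database.items():
        dist = float(np.linalg.norm(np.asarray(probe, dtype=float)
                                    - np.asarray(stored, dtype=float)))
        scores.append((dist, name))
    scores.sort()  # smallest distance first
    return [name for _, name in scores[:top_k]]
```

Returning a ranked list rather than a single hard decision matches the observation that these systems "return a list of the most likely people in the database", leaving the final accept/reject choice to a threshold or a human operator.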