Genetic AND Evolutionary Feature Extraction for Periocular-Based Biometric Recognition



INTRODUCTION

Biometrics research is the area of study devoted to the
development of automated means for identifying individuals
based on human biological and/or behavioral characteristics
[7]. To date, a number of biometric modalities have
been investigated and have been shown to be successful in
identifying individuals. These modalities include fingerprint
[10], face [18], iris [1, 8], voice [6], and gait [17], to
name just a few. One of the newest biometric modalities is the
periocular region [11].
Regardless of the underlying modality being used for identification,
a biometric identification system is typically composed
of four modules [7]: a sensor module, a feature extraction
module, a matching/decision-making module, and
a database module. The sensor module captures an image of an
individual using the biometric of interest, and the feature
extraction module then extracts a set of features that serves
as a template for identifying individuals. The template of a
newly captured image is compared with a database (the database
module) of templates associated with previously enrolled
individuals, and the matching/decision-making module determines
whether the template of the newly captured image matches any of
the templates belonging to known individuals within the
database module.
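The four-module flow described above can be sketched in a few lines. This is an illustrative outline only, not the paper's implementation; `extract_features`, `match_score`, and `threshold` are hypothetical stand-ins for the feature-extraction and matching/decision-making modules:

```python
def identify(raw_image, gallery, extract_features, match_score, threshold):
    """Sketch of the four-module biometric pipeline.

    `raw_image` stands in for the sensor module's output; `gallery`
    plays the role of the database module (enrolled templates keyed
    by subject id); `extract_features` and `match_score` are
    hypothetical stand-ins for the remaining two modules.
    """
    probe = extract_features(raw_image)  # feature extraction module
    # Matching/decision-making module: compare the probe template
    # against every enrolled template and keep the best match.
    best_id, best = None, float("-inf")
    for subject_id, template in gallery.items():
        s = match_score(probe, template)
        if s > best:
            best_id, best = subject_id, s
    # Accept the best match only if it clears the decision threshold.
    return best_id if best >= threshold else None
```

A higher `match_score` is taken to mean a closer match; a distance-based matcher would simply negate its distances.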



BACKGROUND
Periocular Feature Extraction and Recognition

Local Binary Patterns (LBP), introduced by Ojala et al.
in [12], was chosen as the method for extracting features from
periocular images. LBP quantifies texture patterns in local
subdivisions of an image, called patches. The following is
a brief description of how the LBP histogram is calculated.
LBP textures are computed for each pixel in a patch. The
texture at a pixel is derived from the LBP function, LBP_{P,R},
which takes as input P pixel locations along a circle of radius
R around a center pixel. Intensity values of sample locations
along the circle that do not fall in the center of a pixel in the
image are calculated by interpolation. These samples can be
represented as a set T of the signs of the differences between
the intensities of the P samples and the intensity of the center
pixel, as in

T = t( s(g_0 - g_c), s(g_1 - g_c), ..., s(g_{P-1} - g_c) ),

where g_c is the intensity of the center pixel, g_p
(p = 0, ..., P-1) are the intensities of the P samples on the
circle, and s(x) = 1 if x >= 0 and 0 otherwise. The LBP_{P,R}
code of the center pixel is then obtained by weighting the signs
by powers of two and summing:

LBP_{P,R} = sum_{p=0}^{P-1} s(g_p - g_c) * 2^p.
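For concreteness, here is a minimal sketch of the LBP_{P,R} computation for a single pixel, including bilinear interpolation for circle samples that do not fall on the pixel grid. The function name and NumPy-based layout are assumptions for illustration, not code from the paper:

```python
import numpy as np

def lbp_p_r(image, cx, cy, P=8, R=1.0):
    """Compute the LBP_{P,R} code for the pixel at (cy, cx).

    P neighbor intensities are sampled on a circle of radius R
    around the center pixel; off-grid sample points are estimated
    by bilinear interpolation, as described above.
    """
    center = image[cy, cx]
    code = 0
    for p in range(P):
        # p-th sample location on the circle of radius R
        theta = 2.0 * np.pi * p / P
        x = cx + R * np.cos(theta)
        y = cy - R * np.sin(theta)
        # Bilinear interpolation between the four surrounding pixels
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        val = (image[y0, x0]         * (1 - fx) * (1 - fy) +
               image[y0, x0 + 1]     * fx       * (1 - fy) +
               image[y0 + 1, x0]     * (1 - fx) * fy +
               image[y0 + 1, x0 + 1] * fx       * fy)
        # s(g_p - g_c): sign of the difference, weighted by 2^p
        code |= int(val >= center) << p
    return code
```

A patch's LBP histogram is then the histogram of these codes over all pixels in the patch (production implementations such as scikit-image's `local_binary_pattern` also handle image borders, which this sketch omits).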



EXPERIMENTS

The dataset used in this paper was taken from the FRGC
dataset [15] and is composed of 1,230 images of 410 subjects.
These images were divided into two sets: a probe set of 410
images (one of each subject) and a gallery set of 820 images
(an additional two images of each of the 410 subjects).
Based on the probe and gallery sets three experiments were
constructed to test the recognition accuracy of GEFEssga.
In the first experiment (FRGCl), the recognition accuracy
of the periocular region of the left eye was tested and in
the second experiment (FRGCr), the recognition accuracy
of the periocular region of the right eye was tested. For
the third experiment (FRGCboth), the recognition accuracy
using both eyes were tested. As mentioned earlier each periocular
region is represented as a set of 1416 features. For the
first two experiments, the original feature set consisted 1416
features. For the third experiment, the original feature set
consisted of 2832 features (1416 features for each eye). For
each of these experiments, the objective of GEFEssga was
to evolve FMs that: (a) minimize the number of features
needed for periocular biometric recognition and (b) improve
recognition accuracy.
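A steady-state GA evolving real-valued feature masks (FMs) along these lines might look as follows. This is a hedged sketch only: the mask encoding, parent selection, crossover, and the way `fitness` combines accuracy with feature count are illustrative assumptions, not the paper's exact GEFEssga:

```python
import random

def ssga_feature_masks(fitness, n_features, pop_size=20, evals=1000, sigma=0.2):
    """Illustrative steady-state GA over real-valued feature masks.

    `fitness` is assumed to score a mask by rewarding recognition
    accuracy while penalizing the number of features kept (the two
    objectives named above). Higher scores are better.
    """
    # Initial population of random masks, one gene per feature
    pop = [[random.random() for _ in range(n_features)] for _ in range(pop_size)]
    scores = [fitness(m) for m in pop]
    for _ in range(evals - pop_size):
        # Uniform crossover between two random parents
        p1, p2 = random.sample(pop, 2)
        child = [random.choice(pair) for pair in zip(p1, p2)]
        # Gaussian mutation (standard deviation sigma), clamped to [0, 1]
        child = [min(1.0, max(0.0, g + random.gauss(0.0, sigma))) for g in child]
        # Steady state: the child replaces the worst individual if better
        worst = min(range(pop_size), key=lambda i: scores[i])
        s = fitness(child)
        if s > scores[worst]:
            pop[worst], scores[worst] = child, s
    return pop[max(range(pop_size), key=lambda i: scores[i])]
```

In use, each mask would be thresholded or applied as weights to select a subset of the 1416 (or 2832) LBP features before matching.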



RESULTS
For all three experiments, GEFEssga evolved a population
of 20 FMs. The standard deviation of the Gaussian mutation
operator was set to 0.2. For each experiment, the GEFEssga
was run a total of 30 times and was allowed a total of 1000
function evaluations on each run. Table 1 shows the results
of the three experiments. The first column in Table 1 denotes
the experiment, the second column denotes the type of feature
set (original or GEFEssga optimized), and the third
column represents the number of features used on average
along with the size of the original feature set. The final
column represents the recognition accuracy (Rank 1 recognition
rate [11]).
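The rank-1 recognition rate reported in Table 1 is the fraction of probes whose nearest gallery template belongs to the same subject. A small sketch of that metric (the entry layout and `distance` function are assumptions for illustration):

```python
def rank1_rate(probes, gallery, distance):
    """Fraction of probes whose closest gallery template has the
    same subject id. `probes` and `gallery` are lists of
    (subject_id, feature_vector) pairs; `distance` is a
    hypothetical dissimilarity function (lower is closer).
    """
    hits = 0
    for probe_id, probe_feats in probes:
        # Rank the gallery by distance and take the top match
        nearest_id, _ = min(gallery, key=lambda g: distance(probe_feats, g[1]))
        if nearest_id == probe_id:
            hits += 1
    return hits / len(probes)
```

With the split described above, `probes` would hold the 410 probe templates and `gallery` the 820 gallery templates (two per subject).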