Biometric Authentication System on Mobile Personal Devices
ABSTRACT
Both theoretical and practical considerations
are taken into account. The final system achieves an equal
error rate of 2% under challenging testing protocols. Its low
hardware and software cost makes the system well suited to
a wide range of security applications.
INTRODUCTION
In this paper, we study the biometric authentication problem
on a mobile personal device (MPD) in the context of secure
communication in a personal network (PN). A PN is a user-centric
ambient communication environment [19] for unlimited
communication between the user and the user's personal electronic
devices. An illustration of the PN is shown in Fig. 1. The
biometric authentication system is envisaged as a secure link
between the user and the user's PN, providing the user with
secure access to the PN. Such an application imposes three
requirements on the biometric authentication system:
1) security; 2) convenience; and 3) low complexity.
FACE DETECTION
Face detection is the initial step for face authentication.
Although detecting a face is an easy visual task for
a human, it is a complicated problem for computer vision,
because the face is a dynamic object subject to a high
degree of variability originating from both external and internal
changes. An extensive literature exists on automatic face
detection [24], [52].
The known face detection methods can be categorized into
the following two large groups: 1) heuristic-based methods and
2) classification-based methods. Examples of the first category
include skin color methods and facial geometry methods [21],
[25], [33], [38]. The heuristic methods are often simple to
implement but not reliable, as the heuristics are often vulnerable
to external changes. In comparison, classification-based
methods can deal with much more complex scenarios,
because they treat face detection as a pattern classification
problem and, thus, benefit greatly from existing pattern
classification resources. The biggest disadvantage of the
classification-based methods, however, is their high computational
load, as the patterns to be classified must cover the
exhaustive set of image patches at every location and scale of
the input image, as shown in Fig. 3.
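The computational load of exhaustive scanning can be made concrete by counting the patches a classifier must score. The sketch below is illustrative only; the window size, stride, and scale factor are assumed values, not parameters from the paper.

```python
# Sketch of exhaustive sliding-window scanning, illustrating why
# classification-based detection is computationally heavy: every
# location at every scale yields one patch the classifier must score.
# window/step/scale_factor are illustrative assumptions.

def count_windows(img_w, img_h, window=24, step=2, scale_factor=1.25):
    """Count the patches a classifier would evaluate over all scales."""
    total = 0
    w, h = img_w, img_h
    while w >= window and h >= window:
        # number of step-strided window positions at this scale
        total += ((w - window) // step + 1) * ((h - window) // step + 1)
        # shrink the image to simulate scanning at a coarser scale
        w = int(w / scale_factor)
        h = int(h / scale_factor)
    return total
```

Even for a modest input image, the count runs into the tens of thousands, which is why downscaling before detection (as done in the system's detection module) pays off.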
FACE REGISTRATION
Generally speaking, the location of the detected face is not
precise enough for further analysis of the face content. It has
been emphasized in the literature that face registration is an
essential step after face detection for subsequent face interpretation
tasks [3], [4], [36]. Two popular approaches to face registration
can be found in the literature: 1) holistic methods and
2) local methods.
Holistic registration methods take advantage of both global
face texture information and the local facial feature information
(i.e., locations of eyes, nose, mouth, etc.). Examples are
the active shape model (ASM) [10], active appearance model
(AAM) [9], and their variants. Fitting such models onto the
input face image is often formulated as an iterative optimization
problem. As is common with such complex optimization problems,
however, two potential drawbacks of holistic registration
are the possibility of being trapped in local minima and the
relatively high computational load.
ILLUMINATION NORMALIZATION
The variability in face images caused by illumination
changes is one of the biggest obstacles to face verification.
It has been suggested that the variability caused by illumination
changes easily exceeds the variability caused by identity
changes [34]. Illumination normalization, therefore, is a
very important preprocessing step before verification. There
has been an intensive study on this topic in the literature,
which can be categorized into two different methodologies.
The first category tries to study the illumination problem in a
fundamental way by building up the physics imaging model
and restoring the 3-D face surface model, either in an explicit
or an implicit way. We call this category the 3-D methods,
which include the linear subspace method [40], illumination
cone [2], spherical harmonics [1], quotient image [41], etc.
The second category does not rely on recovering the full 3-D
information; instead, these methods work directly on the 2-D
image pixel values. We call this category the 2-D methods,
which include histogram equalization [27], linear and homomorphic
filters [22], the Retinex method [29], the diffusion
method [7], etc.
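Histogram equalization, the simplest of the 2-D methods listed above, can be written in a few lines: the grey-level cumulative distribution is used as a lookup table so that the output levels are spread roughly uniformly. This is a minimal sketch, not the paper's implementation.

```python
import numpy as np

# Minimal histogram equalization, a 2-D illumination normalization
# method: it operates directly on pixel values, with no 3-D model.

def histogram_equalize(img):
    """Remap grey levels of a uint8 image so their CDF is ~uniform."""
    hist = np.bincount(img.ravel(), minlength=256)  # grey-level counts
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)      # CDF as lookup table
    return lut[img]
```

Its appeal for an MPD setting is the negligible cost: one pass to build the lookup table and one pass to apply it.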
Test Protocol
For one specific user, we learn the density of the user class
from the user data of one session and compute the testing
genuine scores, i.e., the log likelihood ratios, from the user
data of the other three independent sessions. The density of the
background class is learned from the public database, and the
impostor scores are computed from our collected data of
the other 19 subjects. As a result, for the user, we have around
1800 images for training and 5400 for validation. On the
impostor side, we have around 10 000 public database faces for
training and 100 000 collected faces of other subjects for validation.
Note that the training and testing data of the background
class are independent in this setting. Using public databases
as the training data is convenient for the MPD implementation,
as the background parameters need to be calculated only once
and stored for all the users.
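The protocol above produces two score sets, genuine and impostor, from which the equal error rate (EER) is read off at the threshold where the false-rejection and false-acceptance rates coincide. The sketch below shows that computation; the score arrays are stand-ins, not the paper's user/background model outputs.

```python
import numpy as np

# Hypothetical sketch of the evaluation step: given genuine and
# impostor log-likelihood-ratio scores, sweep the decision threshold
# and approximate the equal error rate (EER) where FRR ~= FAR.

def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping over all observed scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (np.inf, 0.0)              # (|FRR - FAR|, EER estimate)
    for t in thresholds:
        frr = np.mean(genuine < t)    # genuine scores rejected
        far = np.mean(impostor >= t)  # impostor scores accepted
        if abs(frr - far) < best[0]:
            best = (abs(frr - far), (frr + far) / 2)
    return best[1]
```

With well-separated score distributions the EER approaches zero; the 2% figure reported in the abstract corresponds to the residual overlap between the genuine and impostor scores under this protocol.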
CONCLUSION
Face verification on the MPD provides a secure link between
the user and the PN. In this paper, we have presented a biometric
authentication system, from face detection, face registration,
illumination normalization, face verification, to information
fusion. Both theoretical and practical concerns are addressed.
The series of solutions to the five modules proves to be
efficient and robust. In addition, the different modules collaborate
with each other in a systematic manner. For example,
downscaling the face in the detection module provides very fast
localization of the face at the cost of coarser scales, but the
subsequent registration module immediately compensates for the
loss in accuracy. The same holds for the illumination normalization
and verification modules, where the latter accommodates
the former well.