Ear Recognition Using Wavelets
Introduction
Biometrics is the science of distinguishing an individual on the basis of physiological features or behavioural characteristics [1]. Physiological characteristics include fingerprints, iris scans, retina scans, the face, facial thermograms, palm prints, ears, etc., whereas behavioural characteristics include gait recognition, odour, voice recognition and signature verification. Biometric results may be obtained using a single modality or multiple modalities, and the achieved results indicate that biometric techniques are considerably more precise and accurate than traditional ones. Beyond precision, certain problems have always been associated with the traditional techniques; consider, for example, possession and knowledge: both can be shared, stolen, forgotten, duplicated, misplaced or taken away. This danger is minimized with biometric means [2].
Biometrics plays a role in all types of security systems. As threats and technologies advance, there is a constant need to search for new modalities, whether as stand-alone applications or in conjunction with existing systems. For a new biometric trait to be included, it must be universal, distinct, permanent and collectable: all individuals must possess the feature (universal), the feature must be identifiable for each individual (distinct), the feature should not vary over time (permanent), and the required information must be easy to acquire (collectable) [3]. Ears are clearly a prominent feature of all persons, making them universally acceptable. Ear biometrics has several advantages over the complete face: reduced spatial resolution, a more uniform distribution of colours, and less variability with expressions and orientation of the face.
In the present paper, a new ear recognition approach using wavelets is applied to human identification. The remainder of the paper is organized as follows. Section 2 presents background and related work on ear recognition. Section 3 covers preprocessing, followed by feature extraction and matching in Section 4. Section 5 reports experimental results and discussion, and Section 6 draws conclusions.
Background and related work
The ear was first used for recognition of human beings by Iannarelli [4], who applied manual techniques to identify ear images. Samples of over 10,000 ears were studied to establish the distinctiveness of the ear. The structure of the ear does not change radically over time: the medical literature [4] reports that ear growth is proportional after the first four months of life and that changes are not noticeable between the ages of 8 and 70. Victor et al. [5] and Chang et al. [6] used eigen-ears for identification, obtaining different results: Chang's results show no difference between ear and face performance, while Victor's results show that ear performance is worse than that of the face. In Chang's view, the difference might be due to the use of images of different quality. Moreno et al. [2] used 2D intensity images of ears with three neural-network approaches (Borda, Bayesian and weighted Bayesian combination) for recognition. In that work, 6 images
from 28 people were used, achieving a recognition rate of about 93%. Chen et al. [7] studied a two-step iterative closest point algorithm on 30 people using manually extracted 3D ear images; the results show 2 incorrect matches out of 60 images.

[M. Ali, M. Y. Javed, A. Basit, 'Ear Recognition Using Wavelets', Proceedings of Image and Vision Computing New Zealand 2007, pp. 83–86, Hamilton, New Zealand, December 2007.]
The methodology adopted for ear recognition in this paper is illustrated in Figure 1. The main blocks are preprocessing, feature extraction, training and matching; the details are given in the following sections.
Preprocessing
Images with ear rings or other artifacts, or images occluded by hair, have not been processed in this work. Each image goes through the following steps before feature extraction:
• The ear image is cropped manually from the complete head image of a person.
• The cropped ear image is resized.
• The coloured image is converted to a grayscale image.
Cropping has been done manually in this work because automated ear cropping is still under development. The cropped ear images have different sizes; in order to obtain the same number of features from each ear image, all images are resized to a fixed size of 64×64 pixels. Each image is then converted from RGB to grayscale (if not already in grayscale) and passed to the feature extraction module. Figure 2 demonstrates the output of the preprocessing step: Figure 2(a) shows the actual image in the database, the cropped image is visible in Figure 2(b), and Figures 2(c) and 2(d) show the resized cropped image in RGB and grayscale respectively.
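The resize and grayscale steps above can be sketched in plain NumPy. The paper does not specify an interpolation method or a colour-conversion formula, so nearest-neighbour resizing and the standard ITU-R BT.601 luminance weights are assumptions here:

```python
import numpy as np

def resize_nearest(img, size=(64, 64)):
    """Nearest-neighbour resize of a 2-D or H x W x 3 image (an assumed
    interpolation scheme; the paper only states the target size)."""
    h, w = img.shape[:2]
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[rows[:, None], cols]

def to_grayscale(rgb):
    """RGB to grayscale using BT.601 luminance weights (assumed formula)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def preprocess(cropped_rgb):
    """Resize a manually cropped ear image to 64 x 64, then convert to
    grayscale, following the order of the steps listed in the paper."""
    resized = resize_nearest(cropped_rgb.astype(float), (64, 64))
    return to_grayscale(resized)
```

The fixed 64×64 output guarantees that every image yields the same number of wavelet coefficients in the feature extraction stage.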
Feature Extraction and Matching
After normalizing the ear images, the next step is feature extraction. A new technique is implemented for feature extraction using the Haar wavelet transform [9]. A level-two Haar wavelet decomposition is applied to the image, and the approximation coefficients of the second level are stored in a row vector. For a 64×64 input this vector contains 256 coefficients, which form the feature vector of the processed ear image.
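A minimal sketch of this step: each level of the 2-D Haar approximation halves both dimensions, so two levels take the 64×64 image to 16×16 = 256 approximation coefficients. The orthonormal scaling factor used below is an assumption, since the paper does not state its normalisation:

```python
import numpy as np

def haar_approx(img):
    """One level of the 2-D Haar transform, keeping only the approximation
    band: the sum of each 2x2 block divided by 2 (orthonormal scaling)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0

def extract_features(gray64):
    """Level-2 Haar approximation of a 64 x 64 grayscale ear image,
    flattened into a 256-element feature (row) vector."""
    level1 = haar_approx(gray64)   # 32 x 32 approximation
    level2 = haar_approx(level1)   # 16 x 16 approximation
    return level2.ravel()          # 256 coefficients
```

Keeping only the second-level approximation band discards the detail (edge) bands, giving a compact, smoothed summary of the ear's intensity structure.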