Recognition of Human Iris Patterns for Biometric Identification
Abstract
A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Most commercial iris recognition systems use patented algorithms developed by Daugman, and these algorithms are able to produce perfect recognition rates. However, published results have usually been produced under favourable conditions, and there have been no independent trials of the technology.
The work presented in this thesis involved developing an ‘open-source’ iris recognition system in order to verify both the uniqueness of the human iris and its performance as a biometric. To determine the recognition performance of the system, two databases of digitised greyscale eye images were used.
The iris recognition system consists of an automatic segmentation system that is based on the Hough transform, and is able to localise the circular iris and pupil region, occluding eyelids and eyelashes, and reflections. The extracted iris region was then normalised into a rectangular block with constant dimensions to account for imaging inconsistencies. Finally, the phase data from 1D Log-Gabor filters was extracted and quantised to four levels to encode the unique pattern of the iris into a bit-wise biometric template.
The Hamming distance was employed for classification of iris templates, and two templates were found to match if a test of statistical independence was failed. The system performed with perfect recognition on a set of 75 eye images; however, tests on another set of 624 images resulted in false accept and false reject rates of 0.005% and 0.238% respectively. Therefore, iris recognition is shown to be a reliable and accurate biometric technology.
Introduction
1.1 Biometric Technology
A biometric system provides automatic recognition of an individual based on some sort of unique feature or characteristic possessed by the individual. Biometric systems have been developed based on fingerprints, facial features, voice, hand geometry, handwriting, the retina [1], and the one presented in this thesis, the iris.
Biometric systems work by first capturing a sample of the feature, such as recording a digital sound signal for voice recognition, or taking a digital colour image for face recognition. The sample is then transformed using a mathematical function into a biometric template. The biometric template provides a normalised, efficient and highly discriminating representation of the feature, which can then be objectively compared with other templates in order to determine identity. Most biometric systems allow two modes of operation: an enrolment mode for adding templates to a database, and an identification mode, in which a template is created for an individual and the database of pre-enrolled templates is then searched for a match.
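The two modes of operation can be sketched as follows. This is an illustrative outline only, assuming bit-string templates compared by Hamming distance (the matching metric used later in this thesis); the class, method names and the 0.32 threshold are hypothetical, not taken from any real system.

```python
def hamming_distance(a, b):
    """Fraction of disagreeing bits between two equal-length bit strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

class BiometricDB:
    def __init__(self, threshold=0.32):
        self.templates = {}          # identity -> enrolled template
        self.threshold = threshold   # maximum distance counted as a match

    def enrol(self, identity, template):
        """Enrolment mode: add a template to the database."""
        self.templates[identity] = template

    def identify(self, template):
        """Identification mode: search all pre-enrolled templates."""
        best = min(self.templates.items(),
                   key=lambda item: hamming_distance(item[1], template),
                   default=None)
        if best and hamming_distance(best[1], template) <= self.threshold:
            return best[0]
        return None                  # no match: subject remains unidentified
```

A template close enough to an enrolled one (below the threshold) identifies the subject; anything further away is rejected.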
A good biometric is characterised by the use of a feature that is: highly unique, so that the chance of any two people sharing the same characteristic is minimal; stable, so that the feature does not change over time; and easily captured, in order to provide convenience to the user and prevent misrepresentation of the feature.
1.2 The Human Iris
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. A front-on view of the iris is shown in Figure 1.1. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter [2].
The iris consists of a number of layers; the lowest, the epithelium layer, contains dense pigmentation cells. The stromal layer lies above the epithelium layer and contains blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation determines the colour of the iris. The externally visible surface of the multi-layered iris contains two zones, which often differ in colour [3]: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.
Formation of the iris begins during the third month of embryonic life [3]. The unique pattern on the surface of the iris is formed during the first year of life, and pigmentation of the stroma takes place for the first few years. Formation of the unique patterns of the iris is random and not related to any genetic factors [4]. The only characteristic that is dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the epigenetic nature of iris patterns, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns. For further details on the anatomy of the human eye consult the book by Wolff [3].
1.3 Iris Recognition
The iris is an externally visible, yet protected organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Image processing techniques can be employed to extract the unique iris pattern from a digitised image of the eye, and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris, and allows comparisons to be made between templates. When a subject wishes to be identified by an iris recognition system, their eye is first photographed, and then a template created for their iris region. This template is then compared with the other templates stored in a database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified.
Although prototype systems had been proposed earlier, it was not until the early nineties that the Cambridge researcher John Daugman implemented a working automated iris recognition system [1][2]. The Daugman system is patented [5] and the rights are now owned by the company Iridian Technologies. Even though the Daugman system is the most successful and most well known, many other systems have been developed. The most notable include the systems of Wildes et al. [7], Boles and Boashash [8], Lim et al. [9], and Noh et al. [10]. The algorithms by Lim et al. are used in the iris recognition system developed by the Evermedia and Senex companies, and the Noh et al. algorithm is used in the ‘IRIS2000’ system, sold by IriTech. These are, apart from the Daugman system, the only other known commercial implementations.
The Daugman system has been tested under numerous studies, all reporting a zero failure rate. The Daugman system is claimed to be able to perfectly identify an individual, given millions of possibilities. The prototype system by Wildes et al. also reports flawless performance with 520 iris images [7], and the Lim et al. system attains a recognition rate of 98.4% with a database of around 6,000 eye images.
Compared with other biometric technologies, such as face, speech and finger recognition, iris recognition can easily be considered as the most reliable form of biometric technology [1]. However, there have been no independent trials of the technology, and source code for systems is not available. Also, there is a lack of publicly available datasets for testing and research, and the test results published have usually been produced using carefully imaged irises under favourable conditions.
1.4 Objective
The objective will be to implement an open-source iris recognition system in order to verify the claimed performance of the technology. The development tool used will be MATLAB®, and emphasis will be placed only on the software for performing recognition, not on hardware for capturing an eye image. A rapid application development (RAD) approach will be employed in order to produce results quickly. MATLAB® provides an excellent RAD environment, with its image processing toolbox and high-level programming methodology. To test the system, two data sets of eye images will be used as inputs: a database of 756 greyscale eye images courtesy of the Chinese Academy of Sciences – Institute of Automation (CASIA) [13], and a database of 120 digital greyscale images courtesy of the Lion’s Eye Institute (LEI) [14].
The system is to be composed of a number of sub-systems, which correspond to each stage of iris recognition. These stages are segmentation – locating the iris region in an eye image, normalisation – creating a dimensionally consistent representation of the iris region, and feature encoding – creating a template containing only the most discriminating features of the iris. The input to the system will be an eye image, and the output will be an iris template, which will provide a mathematical representation of the iris region. For an overview of the components of the system see Appendix B.
Segmentation
2.1 Overview
The first stage of iris recognition is to isolate the actual iris region in a digital eye image. The iris region, shown in Figure 1.1, can be approximated by two circles, one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular reflections can occur within the iris region, corrupting the iris pattern. A technique is required to isolate and exclude these artefacts, as well as to locate the circular iris region.
The success of segmentation depends on the imaging quality of eye images. Images in the CASIA iris database [13] do not contain specular reflections due to the use of near infra-red light for illumination. However, the images in the LEI database [14] contain these specular reflections, which are caused by imaging under natural light. Also, persons with darkly pigmented irises will present very low contrast between the pupil and iris region if imaged under natural light, making segmentation more difficult. The segmentation stage is critical to the success of an iris recognition system, since data that is falsely represented as iris pattern data will corrupt the biometric templates generated, resulting in poor recognition rates.
2.2 Literature Review
2.2.1 Hough Transform
The Hough transform is a standard computer vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, present in an image. The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and iris regions. An automatic segmentation algorithm based on the circular Hough transform is employed by Wildes et al. [7], Kong and Zhang [15], Tisse et al. [12], and Ma et al. [16]. Firstly, an edge map is generated by calculating the first derivatives of intensity values in an eye image and then thresholding the result. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the centre coordinates xc and yc, and the radius r, which are able to define any circle according to the equation

(x − xc)² + (y − yc)² = r²
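The voting procedure just described can be sketched as follows. This is a pure-Python illustration of the accumulator idea only, not the thesis implementation (which is in MATLAB®); the coarse 5° angular sampling and the example radii are arbitrary choices.

```python
import math
from collections import defaultdict

def circular_hough(edge_points, r_min, r_max):
    """Accumulate votes for circle parameters (xc, yc, r) from edge points.

    Each edge point votes for every circle of radius r in [r_min, r_max]
    that could pass through it, i.e. all centres lying at distance r
    from that point. The maximum in Hough space is the best-fitting circle.
    """
    votes = defaultdict(int)
    for (x, y) in edge_points:
        for r in range(r_min, r_max + 1):
            for theta_deg in range(0, 360, 5):   # coarse angular sampling
                theta = math.radians(theta_deg)
                xc = round(x - r * math.cos(theta))
                yc = round(y - r * math.sin(theta))
                votes[(xc, yc, r)] += 1
    return max(votes, key=votes.get)

# Example: edge points sampled on a circle of radius 10 centred at (30, 30);
# the returned maximum should lie close to (30, 30, 10).
pts = [(30 + round(10 * math.cos(math.radians(a))),
        30 + round(10 * math.sin(math.radians(a)))) for a in range(0, 360, 15)]
```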
Implementation
It was decided to use the circular Hough transform for detecting the iris and pupil boundaries. This involves first employing Canny edge detection to generate an edge map. Gradients were biased in the vertical direction for the outer iris/sclera boundary, as suggested by Wildes et al. [7]. Vertical and horizontal gradients were weighted equally for the inner iris/pupil boundary. A modified version of Kovesi’s Canny edge detection MATLAB® function [22] was implemented, which allowed for weighting of the gradients.
The range of radius values to search for was set manually, depending on the database used. For the CASIA database, the iris radius ranges from 90 to 150 pixels, while the pupil radius ranges from 28 to 75 pixels. In order to make the circle detection process more efficient and accurate, the Hough transform for the iris/sclera boundary was performed first; the Hough transform for the iris/pupil boundary was then performed within the detected iris region, rather than the whole eye region, since the pupil is always within the iris region. After this process was complete, six parameters were stored: the radius, and the x and y centre coordinates, for both circles.
Eyelids were isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform. A second horizontal line is then drawn, which intersects with the first line at the iris edge that is closest to the pupil. This process is illustrated in Figure 2.2 and is done for both the top and bottom eyelids. The second horizontal line allows maximum isolation of eyelid regions. Canny edge detection is used to create an edge map, and only horizontal gradient information is taken. The linear Hough transform is implemented using the MATLAB® Radon transform, which is a form of the Hough transform. If the maximum in Hough space is lower than a set threshold, then no line is fitted, since this corresponds to non-occluding eyelids. Also, the lines are restricted to lie exterior to the pupil region and interior to the iris region. A linear Hough transform has the advantage over its parabolic version in that there are fewer parameters to deduce, making the process less computationally demanding.
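A direct Hough-style version of this line-fitting rule, including the "no line if the Hough maximum is below a threshold" check, can be sketched as follows. The thesis itself uses MATLAB's Radon transform instead; the (rho, theta) parameterisation, the near-horizontal angle range, and the threshold value here are all illustrative.

```python
import math
from collections import defaultdict

def fit_eyelid_line(edge_points, vote_threshold):
    """Return (rho, theta_deg) of the strongest near-horizontal line,
    or None if the Hough maximum falls below vote_threshold
    (corresponding to a non-occluding eyelid)."""
    votes = defaultdict(int)
    for (x, y) in edge_points:
        # Eyelid boundaries are near-horizontal, so only normal angles
        # close to 90 degrees are tried.
        for theta_deg in range(80, 101):
            t = math.radians(theta_deg)
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[(rho, theta_deg)] += 1
    if not votes:
        return None
    best = max(votes, key=votes.get)
    return best if votes[best] >= vote_threshold else None
```

With edge points on the horizontal line y = 40, the accumulator peaks near (rho = 40, theta = 90°); raising the threshold above the number of edge points makes the function decline to fit a line.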
For isolating eyelashes in the CASIA database a simple thresholding technique was used, since analysis reveals that eyelashes are quite dark when compared with the rest of the eye image. Analysis of the LEI eye images shows that thresholding to detect eyelashes would not be successful: although the eyelashes are quite dark compared with the surrounding eyelid region, areas of the iris region are equally dark due to the imaging conditions. Therefore, thresholding to isolate eyelashes would also remove important iris region features, making this technique infeasible. However, eyelash occlusion is not very prominent in the LEI database, so no technique was implemented to isolate eyelashes there.
The LEI database also required isolation of specular reflections. This was implemented, again, using thresholding, since reflection areas are characterised by high pixel values close to 255. For the eyelid, eyelash, and reflection detection process, the coordinates of any of these noise areas are marked using the MATLAB® NaN type, so that intensity values at these points are not misrepresented as iris region data.
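The masking step can be sketched as follows, assuming the image is represented as a plain list of greyscale rows; the two threshold values are illustrative, and Python's NaN plays the role of the MATLAB® NaN type used in the thesis.

```python
import math

def mask_noise(image, lash_thresh=40, reflection_thresh=245):
    """Return a copy of a greyscale image (rows of 0-255 intensity values)
    with eyelash candidates (very dark pixels) and specular-reflection
    candidates (near-saturated pixels) replaced by NaN, so that later
    stages do not misrepresent them as iris pattern data."""
    masked = []
    for row in image:
        masked.append([float('nan')
                       if (p < lash_thresh or p > reflection_thresh)
                       else float(p)
                       for p in row])
    return masked

# Example: the dark pixels (10, 20) and the bright pixel (250) are masked.
img = [[10, 120, 250],
       [130, 20, 140]]
out = mask_noise(img)
```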
2.4 Results
The automatic segmentation model proved to be successful. The CASIA database yielded good segmentation, since those eye images had been taken specifically for iris recognition research and the boundaries of the iris, pupil and sclera were clearly distinguished. For the CASIA database, the segmentation technique managed to correctly segment the iris region in 624 out of 756 eye images, which corresponds to a success rate of around 83%. The LEI images proved more problematic: the segmentation process correctly identified the iris and pupil boundaries in only 75 out of 120 eye images, a success rate of around 62%.
The problem images had small intensity differences between the iris region and the pupil region, as shown in Figure 2.3. One problem faced with the implementation was that it required different parameters to be set for each database. These parameters were the radius of the iris and pupil to search for, and the threshold values for creating edge maps. However, for installations of iris recognition systems, these parameters would only need to be set once, since the camera hardware, imaging distance, and lighting conditions would usually remain the same.
Normalisation
3.1 Overview
Once the iris region is successfully segmented from an eye image, the next stage is to transform the iris region so that it has fixed dimensions in order to allow comparisons. The dimensional inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil dilation from varying levels of illumination. Other sources of inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The normalisation process will produce iris regions which have the same constant dimensions, so that two photographs of the same iris under different conditions will have characteristic features at the same spatial location.
Another point of note is that the pupil region is not always concentric within the iris region, and is usually slightly nasal [2]. This must be taken into account if trying to normalise the ‘doughnut’ shaped iris region to have constant radius.
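One way to handle both requirements can be sketched as follows, under simplifying assumptions (circular boundaries, nearest-neighbour sampling, and an illustrative 20×240 output size, none of which are prescribed by the text above): for each angle, sample points are spaced linearly between the pupil boundary and the iris boundary, so a non-concentric pupil is accommodated naturally.

```python
import math

def normalise(image, pupil, iris, radial_res=20, angular_res=240):
    """Remap the 'doughnut' iris region to a fixed-size rectangular block.

    pupil and iris are (xc, yc, r) circles, which need not be concentric;
    image is a list of rows of greyscale values.
    """
    px, py, pr = pupil
    ix, iy, ir = iris
    block = []
    for i in range(radial_res):
        t = i / (radial_res - 1)        # 0 at pupil edge, 1 at iris edge
        row = []
        for j in range(angular_res):
            theta = 2 * math.pi * j / angular_res
            # Boundary points at this angle on each circle
            x0 = px + pr * math.cos(theta); y0 = py + pr * math.sin(theta)
            x1 = ix + ir * math.cos(theta); y1 = iy + ir * math.sin(theta)
            # Linear interpolation between the two boundaries
            x = round((1 - t) * x0 + t * x1)
            y = round((1 - t) * y0 + t * y1)
            row.append(image[y][x])
        block.append(row)
    return block
```

Because sampling is tied to the detected boundaries rather than to a fixed centre, the same physical iris features land at the same (row, column) positions in the output block regardless of pupil dilation or offset.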