1 Introduction

Identification techniques based on iris analysis have gained popularity and scientific interest since John Daugman introduced the first algorithm for identifying persons by the iris of the eye in 1993 [5]. Since then many other researchers have presented new solutions in this area [3, 13, 14, 16, 21, 23, 25–27].

The iris is a structure which forms in an early phase of human life and remains unchanged for most of a lifetime. The texture of the iris is independent of genetic relationships: every person in the world, even identical twins, possesses different irises. However, many problems must be faced when encoding the iris, such as the change of the pupil opening depending on lighting conditions, the covering of parts of the iris by the eyelids and eyelashes, or the rotation of the iris due to inclination of the head or eye movement.

2 Acquisition of iris images and iris databases

Generally, acquisition of the iris should be implemented in accordance with the standards. The application interface has to be built using the ANSI INCITS 358-2002 (known as BioAPI™ v1.1) recommendations. Additionally, the iris image should comply with the ISO/IEC 19794-6 norm [1].

Some of the first systems for iris acquisition were developed using concepts proposed by Daugman [5] and Wildes [26]. Daugman's system captures an image of the iris with a diameter of typically between 100 and 200 pixels, taking pictures from a distance of 15–46 cm using a 330 mm lens. In the Wildes proposal the iris image has a diameter of about 256 pixels and the photo is taken from a distance of 20 cm using an 80 mm lens.

Currently, several iris acquisition devices as well as whole iris recognition systems can be found on the commercial market. Examples of the iris devices are shown in Table 1.

Table 1 Examples of the iris devices [6, 10, 17, 18]

There are also systems which can detect the iris of persons in motion. In 2009, the company Sarnoff [9] presented the first device in the Iris-On-the-Move series, which realizes this goal. The IOM Passport Portal System allows the detection and identification of thirty people per minute. The system can effectively be used to secure sites with a large flow of people, such as embassies, airports, or factories. Figure 1 shows the diagram of the IOM system. The system uses a card reader as a preliminary step in person identification. The person is detected by a system of cameras, then the iris is located and the iris code is determined and compared with the stored pattern. An advantage of this system is its ability to identify people who wear glasses or contact lenses. The system can identify people from a distance of three meters.

Fig. 1
figure 1

Diagram of the system: Iris-On-the-Move

Experimental studies were carried out using databases of iris images prepared by scientific institutions dealing with this issue. Two publicly available databases were used during our experiments, as shown in Section 4. The first database was CASIA [2], coming from the Chinese Academy of Sciences, Institute of Automation, while the second, IrisBath [24], was developed at the University of Bath. We also obtained access to the UBIRIS v.2.0 database [22] and to the database prepared by Michael Dobeš and Libor Machala [7].

The CASIA database is available in three versions; all photographs were taken in the near infrared. We used the first and the third version of this database in our experimental research. Version 1.0 contains 756 iris images with dimensions 320 × 280 pixels, captured from 108 different eyes. The pictures in the CASIA database were taken using a specialized camera and saved in BMP format. For each eye 7 photos were taken, 3 in the first session and 4 in the second. The pupil area was uniformly filled with a dark color, thus eliminating the reflections occurring during the acquisition process.

The third version of the CASIA database contains more than 22,000 images from more than 700 different subjects. It consists of three sets of data in 8-bit JPG format. The CASIA-IrisV3-Lamp section contains photographs taken close to the light source with a lamp turned on and off, in order to vary the lighting conditions, while CASIA-IrisV3-Twins includes iris images of one hundred pairs of twins.

Lately a new version of the CASIA database has been created: CASIA-IrisV4. It is an extension of CASIA-IrisV3 and contains six subsets. Three subsets come from CASIA-IrisV3: CASIA-Iris-Interval, CASIA-Iris-Lamp, and CASIA-Iris-Twins. Three subsets are new: CASIA-Iris-Distance, CASIA-Iris-Thousand, and CASIA-Iris-Syn.

CASIA-Iris-Distance contains iris images captured using a self-developed long-range multi-modal biometric image acquisition and recognition system; its advanced biometric sensor can recognize users from 3 m away. CASIA-Iris-Thousand contains 20,000 iris images from 1,000 subjects. CASIA-Iris-Syn contains 10,000 synthesized iris images of 1,000 classes; the iris textures of these images were synthesized automatically from a subset of CASIA-IrisV1.

CASIA-IrisV4 contains a total of 54,607 iris images from more than 1,800 genuine subjects and 1,000 virtual subjects. All iris images are 8-bit gray-level JPEG files collected under near-infrared illumination.

The IrisBath database was created by the Signal and Image Processing Group (SIPG) at the University of Bath in the UK [24]. The project aimed to bring together 20 high-resolution images from 800 subjects. Most of the photos show the irises of students from over one hundred countries, who form a representative group. The photos were taken at a resolution of 1,280 × 960 pixels in 8-bit BMP, using a system with a LightWise ISG camera. Thousands of free-of-charge images are available that have been compressed into the JPEG2000 format at a rate of 0.5 bit per pixel.

3 Extraction and coding of iris features

The process of creating the iris code comprises three successive phases [15]: segmentation, normalization, and feature encoding, as shown in Fig. 2.

Fig. 2
figure 2

Stages of creation of iris codes

3.1 Segmentation process

Separation of the iris from the whole eye area is realized during the segmentation phase. At this stage it is crucial to determine the position of the upper and lower eyelids, as well as to exclude areas covered by the lashes. In addition, attention should be paid to the elimination of regions caused by light reflections from the cornea of the eye.

The first technique of iris localization was proposed by the precursor of the field of iris recognition, i.e. by Daugman [5]. This technique uses the so-called integro-differential operator, which acts directly on the image of the iris, searching over circle parameters for the maximum of the blurred partial derivative, with respect to increasing radius, of the normalized contour integral of the image along a circular path. The operator thus behaves like a circular edge detector, acting in the three-dimensional parameter space (x, y, r): the center coordinates and the radius of the circle which determine the edge of the iris are sought. The algorithm first detects the outer edge of the iris and then, limited to the area of the detected iris, it looks for its inside edge. Using the same operator, but changing the contour path to an arc, we can also look for the edges of the eyelids, which may partly overlap the photographed iris.
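In Daugman's formulation this search can be written as the following maximization, where \(I(x,y)\) is the eye image, \(G_\sigma(r)\) is a Gaussian smoothing function of scale \(\sigma\), and \(\ast\) denotes convolution over \(r\):

```latex
\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) \ast \frac{\partial}{\partial r}
\oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, \mathrm{d}s \right|
```

The contour integral averages the image intensity over the circle of radius \(r\) centered at \((x_0, y_0)\); its blurred radial derivative peaks where the intensity changes abruptly between neighboring circles, i.e. at the pupil and limbus boundaries.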

Another technique was proposed by Wildes [26]. In this case the best-fitting circle is also sought, but the difference (compared to the Daugman method) lies in the way the parameter space is searched. Iris localization takes place in two stages. First, an edge map of the image is created; then each detected edge point casts a vote for the corresponding values in the parameter space of the sought pattern. The edge map is created with a gradient method: a vector field is assigned to the scalar bitmap, defining the direction and strength of the increase in pixel brightness, and the gradient maxima exceeding an appropriately chosen threshold are kept as edge points. The voting over the resulting edge map is performed using the Hough transform.
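As an illustration of the voting step, the following sketch (not the Wildes implementation; a minimal Python/NumPy version that assumes a fixed, known radius) accumulates votes for circle centres from a binary edge map:

```python
import numpy as np

def hough_circle_centres(edge_map, radius, n_angles=180):
    """Accumulate Hough votes for circle centres of a fixed radius.

    Each edge pixel votes for every centre that would place it on a
    circle of the given radius; a real detector also scans over radii,
    which yields the (x, y, r) parameter space described in the text.
    """
    h, w = edge_map.shape
    acc = np.zeros((h, w), dtype=np.int32)            # accumulator over (y0, x0)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edge_map)                     # coordinates of edge points
    for y, x in zip(ys, xs):
        y0 = np.round(y - radius * np.sin(thetas)).astype(int)
        x0 = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (y0 >= 0) & (y0 < h) & (x0 >= 0) & (x0 < w)
        np.add.at(acc, (y0[ok], x0[ok]), 1)           # one vote per candidate centre
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cy, cx

# Synthetic edge map: an ideal circle of radius 20 centred at (40, 50)
edges = np.zeros((90, 100), dtype=bool)
t = np.linspace(0.0, 2.0 * np.pi, 360)
edges[np.round(40 + 20 * np.sin(t)).astype(int),
      np.round(50 + 20 * np.cos(t)).astype(int)] = True
cy, cx = hough_circle_centres(edges, radius=20)
```

On the synthetic edge map the accumulator peaks at the true centre (up to rounding), which is exactly the mechanism by which the voting selects the iris boundary circle.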

In our experimental program [11] we also used the Hough transform, and to produce the edge map we used the modified Kovesi algorithm [12] based on the Canny edge detector. An illustration of the segmentation process with an execution-time analysis is presented in Fig. 3.

Fig. 3
figure 3

Example execution-time analysis of the segmentation process

3.2 Normalization

The main aim of the normalization step is the transformation of the localized iris into a defined format in order to allow comparisons with other iris codes. This operation requires consideration of specific characteristics of the iris, such as the variable pupil opening and the non-coincident centers of the pupil and the iris. Possible rotation of the iris, caused by tilting the head or by movement of the eye in its orbit, should also be taken into account.

Having successfully located the image area occupied by the iris, the normalization process has to ensure that the same areas in different iris images are represented at the same scale and in the same place of the created code. Only with equal representations can the comparison of two iris codes be correctly justified. For this phase Daugman suggested a standard transformation from Cartesian to pseudo-polar coordinates, which unwraps the iris ring. This transformation eliminates the problem of the non-central position of the pupil relative to the iris, as well as the variation of the pupil opening under different lighting conditions. For further processing, points in the vicinity of 90 and 270° (i.e., at the top and at the bottom of the iris) can be omitted. This reduces errors caused by the presence of the eyelids and eyelashes in the iris area.
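Daugman's transformation (often called the rubber-sheet model) maps each iris point to dimensionless polar coordinates \((r, \theta)\), with \(r \in [0,1]\) and \(\theta \in [0, 2\pi)\):

```latex
I\big(x(r,\theta),\, y(r,\theta)\big) \rightarrow I(r,\theta), \qquad
\begin{aligned}
x(r,\theta) &= (1-r)\,x_p(\theta) + r\,x_i(\theta),\\
y(r,\theta) &= (1-r)\,y_p(\theta) + r\,y_i(\theta),
\end{aligned}
```

where \((x_p, y_p)\) and \((x_i, y_i)\) are the points of the pupil and iris boundaries along the direction \(\theta\). Linear interpolation between the two boundaries is what makes the mapping insensitive to pupil dilation and to non-concentric pupil and iris circles.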

Poursaberi [20] proposed to normalize only half of the iris (close to the pupil), thus bypassing the problem of the eyelids and eyelashes. Pereira [19] showed, in an experiment in which the iris region was divided into ten rings of equal width, that a potentially better decision can be made with only a part of the rings, namely those numbered 2, 3, 4, 5, and 7, with ring number one being the closest to the pupil.

During our tests, the Daugman proposal and the model based on its implementation by Libor Masek in Matlab [15] were used in the normalization stage. At the same time we can select the area of the iris which is subject to normalization, using both an angular division and a division along the radius. The angular division consists in determining the angular range used for normalization of the iris. This range is defined by two intervals: the first includes angles from −90 to 90° and the second the angles from 90 to −90° (i.e., counted counterclockwise).
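A minimal sketch of such an unwrapping (nearest-neighbour sampling and circular boundaries for brevity; the function and parameter names are ours, not those of Masek's toolbox):

```python
import numpy as np

def normalize_iris(image, pupil_c, pupil_r, iris_c, iris_r,
                   radial_res=20, angular_res=240):
    """Unwrap the iris ring into a fixed-size rectangular block.

    For each direction theta, sample points linearly interpolated
    between the pupil boundary (r = 0) and the iris boundary (r = 1),
    following the rubber-sheet idea.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    rs = np.linspace(0.0, 1.0, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=image.dtype)
    for j, th in enumerate(thetas):
        # boundary points on the pupil and iris circles at angle th
        xp = pupil_c[0] + pupil_r * np.cos(th)
        yp = pupil_c[1] + pupil_r * np.sin(th)
        xi = iris_c[0] + iris_r * np.cos(th)
        yi = iris_c[1] + iris_r * np.sin(th)
        for i, r in enumerate(rs):
            x = int(round((1 - r) * xp + r * xi))   # nearest-neighbour sample
            y = int(round((1 - r) * yp + r * yi))
            out[i, j] = image[y, x]
    return out
```

Restricting the normalization to an angular span or to a sub-interval of the radius, as in our experiments, amounts to sampling `thetas` and `rs` over narrower ranges.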

3.3 Features coding

The last stage, feature encoding, aims to extract from the normalized iris the distinctive features of the individual and to transform them into a binary code. In order to extract the individual characteristics of the normalized iris, various types of filtering can be applied. Daugman coded each point of the iris with two bits, using two-dimensional Gabor filters and phase quadrant quantization.

Field suggested using a logarithmic variant of Gabor filters, the so-called Log-Gabor filters [8]. These filters have certain advantages over conventional Gabor filters: by definition they possess no DC component, which may occur in the real part of Gabor filters. Another advantage of the logarithmic variant is that it emphasizes high frequencies over low frequencies, which brings these filters closer to the typical frequency distribution of real images. Owing to this feature, logarithmic Gabor filters better expose the information contained in the image.
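A schematic Python/NumPy version of this coding step (a sketch in the spirit of Masek-style 1D Log-Gabor encoding, not a reproduction of any particular implementation; the parameter values are illustrative):

```python
import numpy as np

def log_gabor_1d(n, wavelength=18.0, sigma_on_f=0.5):
    """Frequency response of a 1-D Log-Gabor filter.

    G(f) = exp(-(log(f / f0))^2 / (2 * log(sigma_on_f)^2)); the response
    at f = 0 is zero, so the filter has no DC component by construction.
    """
    freqs = np.arange(n) / n            # FFT bin frequencies, 0 .. (n-1)/n
    f0 = 1.0 / wavelength               # centre frequency
    g = np.zeros(n)
    nz = freqs > 0                      # log undefined at f = 0 -> G(0) = 0
    g[nz] = np.exp(-(np.log(freqs[nz] / f0) ** 2)
                   / (2.0 * np.log(sigma_on_f) ** 2))
    return g

def encode_rows(norm_iris, wavelength=18.0):
    """Filter each row of the normalised iris and phase-quantise to 2 bits."""
    n = norm_iris.shape[1]
    g = log_gabor_1d(n, wavelength)
    spectrum = np.fft.fft(norm_iris, axis=1)
    response = np.fft.ifft(spectrum * g, axis=1)   # complex filter response
    # one bit for the sign of the real part, one for the imaginary part
    return np.concatenate([response.real > 0, response.imag > 0], axis=0)
```

Each sample of the complex response contributes two bits (the signs of its real and imaginary parts), so a normalized iris of R × C samples yields a 2R × C binary code.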

4 Time analysis

During our research we used the program IrisCode_TK2007 [11]. Multi-processing was used in order to automatically create iris codes for multiple files. The study involved the two databases described in Section 2, namely CASIA and IrisBath.

Test results are presented in Table 2. The section “Information” includes the total number of files and the number of iris classes. The section “Results” contains the results for the processed images: the average execution times of the individual stages and the total processing time for all files. Figure 4 shows the times of the individual stages, expressed as a percentage of the overall time, for all tested databases (processed with an Intel Core i7 CPU; 2.93 GHz).

Table 2 Processing times for two bases: IrisBath and CASIA
Fig. 4
figure 4

Share of particular stages, expressed as a percentage, for the tested databases

Our program also contains a “Multithreading” option, which enables multithreaded processing on multiprocessor machines. Figure 5 presents the comparison of the processing times of the various stages with and without the “Multithreading” option (processed on an Intel Core i7 CPU; 2.93 GHz) for the IrisBath database. The total processing time for one processor was about 17 minutes, while for two processors it was about 9 minutes.
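The speed-up is possible because the per-file pipeline stages are independent between files, so the files can simply be dispatched to a pool of workers. A schematic Python sketch (the original program is not written in Python; `build_iris_code` is a hypothetical stand-in for the segmentation–normalization–coding pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def build_iris_code(path):
    """Stand-in for the per-file pipeline: segmentation -> normalization -> coding."""
    return f"code({path})"        # a real pipeline would return the binary iris code

def process_database(paths, workers=2):
    """Dispatch independent files to a pool of workers; result order is preserved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(build_iris_code, paths))

codes = process_database([f"img_{i}.bmp" for i in range(4)], workers=2)
```

Because files are processed independently, the observed near-halving of the total time with two processors is the expected behaviour for this kind of embarrassingly parallel workload.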

Fig. 5
figure 5

Participation of individual stages, expressed in percentage, for all tested databases

5 Results of identification with IrisBath database

5.1 Study of the areas of normalization

In this experiment we tested the dependency of iris identification on different parts of the iris.

First, we examined how the angular span of the iris influences the identification of a person. We define the angular span as the range of the iris that was used for normalization. Two semicircles were obtained by dividing the circle describing the iris with a vertical line. In each of the semicircles we define angles of n degrees, oriented in opposite directions, to obtain the areas of normalization shown in Fig. 6a.

Fig. 6
figure 6

Area of normalization

The second part of the experiment studied the impact that the length of the iris radius has on person identification. The length of the radius defines the segment of the iris that is used for normalization. Such a ring does not have to start at the edge of the pupil and does not have to end at the outer edge of the iris. Figure 6b illustrates the idea of this approach.

5.1.1 Angular span

For this experiment symmetric areas of the iris were used, with angles ranging from 30 to 180° (at 30° intervals). Figure 7 shows the results of this experiment and Table 3 shows the obtained EER values.

Fig. 7
figure 7

DET plots illustrating influence of angular span of iris on person identification

Table 3 EER for particular angular span of iris normalization area

It can be observed that the best results of iris identification were obtained for angles in the range from 120 to 180°, the latter being the biggest possible value of this parameter. From Fig. 7 it can also be inferred that increasing the angular span from 120 to 180° does not bring much improvement. Based on this, we can conclude that the upper and lower parts of the iris (very near the outer edge) do not carry significant information and are in most cases covered by lids and lashes.

5.1.2 Length of radius

After defining the original radius R of the iris as extending from its inner to its outer edge, two different experiments can be considered. First, we tested how a radius increasing from the inner to the outer edge influences the identification of a person. In the second step, rings of the iris counted from its outer part were studied. The results of those experiments are shown in Figs. 8 and 9, respectively. Table 4 contains the EERs for these experiments. The tests were performed for an angular span equal to 180°.

Fig. 8
figure 8

DET plots illustrating influence of length of radius of the iris on person identification (radius increases from inside to outside)

Fig. 9
figure 9

DET plots illustrating influence of length of radius of the iris on person identification (radius increases from outside to inside)

Table 4 EER for particular radii of the iris used for iris normalization

From Fig. 8 and Table 4 it can be seen that thicker rings of the iris image used for normalization give better results when the radius increases from inside to outside, but only up to r = 0.9R. If the whole iris image is taken, the result is worse. It can be inferred that the outer parts (r ∈ [0.9R; R]) of the iris are covered with lids or lashes, which impedes identification.

Furthermore, from Fig. 9 it can be deduced that the same length of radius (for r ∈ [0.1R; 0.5R]) gives better results for the inner parts of the iris. This leads to the conclusion that the outer parts of the iris do not contain the same amount of distinctive information as the inner parts. Another observation, based on Fig. 8, is that the innermost part of the iris can have a negative impact on person identification. This phenomenon may be caused by the vicinity of the pupil.

6 Conclusions

The most important issue in a biometric identification process is recognition accuracy. In an iris recognition system it depends on acquisition precision and on the feature extraction parameters.

During our study the following results were obtained: FAR = 0.351% (false acceptance rate) and FRR = 0.572% (false rejection rate), which give an overall iris verification correctness of 99.5%. For the CASIA database v.1.0 the best result was obtained with a code size of 360 × 40 bits: FAR = 3.25%, FRR = 3.03%, and a ratio of correct verification of iris codes at the level of 97% [4].

The best results were obtained using the IrisBath database by means of the Log-Gabor 1D filter. We obtained EER = 0.0031% for an angular span of the iris normalization area from 120 to 180°. It can be inferred from our experiments that increasing this parameter above 120° does not improve identification.

The data shown in Table 4 lead to the conclusion that the inner half of the iris area used for normalization contains more distinctive information than the outer half. Another observation is that the innermost and outermost parts of the iris used for normalization can worsen the identification results, because of the vicinity of the pupil or of the lids and lashes.

It can be observed that the calculation time is short enough for the proposed iris recognition system to operate in real time. However, effective acquisition of the iris image remains a problem.