Three successive phases can be identified in the process of creating the iris code. They are, respectively: segmentation, normalization, and feature encoding, as shown in Fig. 2.
Separation of the iris from the whole eye area is performed during the segmentation phase. At this stage it is crucial to determine the position of the upper and lower eyelids and to exclude areas covered by the lashes. In addition, attention should be paid to eliminating regions caused by light reflections from the cornea of the eye.
The first technique of iris localization was proposed by the pioneer of the field of iris recognition, i.e. by Daugman. This technique uses the so-called integro-differential operator, which acts directly on the image of the iris, seeking the maximum, over a circular path, of the blurred partial derivative of the normalized contour integral of the image with respect to the increasing circle radius. The operator thus behaves like a circular edge detector, acting in the three-dimensional parameter space (x, y, r): it searches for the centre coordinates and the radius of the circle that determine the edge of the iris. The algorithm first detects the outer edge of the iris and then, restricted to the area of the detected iris, looks for its inner edge. Using the same operator, but changing the contour from a circle to an arc path, we can also look for the edges of the eyelids, which may partly overlap the photographed iris.
Another technique was proposed by Wildes. In this case the best-fitting circle is also sought, but the difference (compared with the Daugman method) lies in the way the parameter space is searched. Iris localization takes place here in two stages. First, an edge map of the image is created; then each detected edge point casts a vote for the corresponding values in the parameter space of the sought pattern. The edge map is created with a gradient method: a vector field is assigned to the scalar bitmap, defining the direction and magnitude of the increase in pixel brightness. Then only the highest gradient values, which determine the edges, are retained using an appropriately chosen threshold. The voting process is performed on the resulting edge map using the Hough transform.
In our experimental program we also used the Hough transform, and to build the edge map we used the modified Kovesi algorithm based on the Canny edge detector. An illustration of the segmentation process, with an analysis of the execution time, is presented in Fig. 3.
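A minimal sketch of this two-stage scheme follows. A plain gradient-threshold map stands in for the Canny/Kovesi edge detector, and a brute-force accumulator implements the circular Hough voting; the function names and parameter values are our assumptions, not those of the cited implementations.

```python
import numpy as np

def edge_map(img, thresh):
    """Crude gradient-magnitude edge map (a stand-in for the Canny-based
    Kovesi detector): keep pixels whose gradient magnitude exceeds thresh."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def hough_circle(edges, radii):
    """Each edge pixel votes for every circle centre that would pass through
    it; the best (x0, y0, r) is read off the accumulator maximum."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    for i, r in enumerate(radii):
        for t in thetas:
            cx = (xs - r * np.cos(t)).round().astype(int)
            cy = (ys - r * np.sin(t)).round().astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc[i], (cy[ok], cx[ok]), 1)  # unbuffered vote count
    i, y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
    return x0, y0, radii[i]
```

For the synthetic dark disk used above, the accumulator peak recovers the disk's centre and radius to within a pixel or two.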
The main aim of the normalization step is to transform the localized iris into a defined format that allows comparison with other iris codes. This operation requires taking into account specific characteristics of the iris, such as the variable pupil opening and the non-coincident centres of the pupil and the iris. A possible rotation of the iris, caused by tilting the head or by movement of the eye in its orbit, should also be noted.
Having successfully located the image area occupied by the iris, the normalization process has to ensure that the same areas in different iris images are represented at the same scale and in the same place of the created code. Only with such equal representations can the comparison of two iris codes be correctly justified. For this phase Daugman suggested a standard transformation from Cartesian coordinates to a ring (pseudo-polar) representation. This transformation eliminates the problem of the non-central position of the pupil relative to the iris, as well as the variation of the pupil opening under different lighting conditions. For further processing, points in the vicinity of 90 and 270° (i.e., at the top and at the bottom of the iris) can be omitted. This reduces errors caused by the presence of the eyelids and eyelashes in the iris area.
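The rubber-sheet remapping described above can be sketched roughly as follows. This is our own simplified version: nearest-neighbour sampling instead of interpolation, illustrative output dimensions, and a function name of our choosing.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, n_r=16, n_theta=64):
    """Daugman-style rubber-sheet model: remap the annulus between the pupil
    and iris boundary circles onto a fixed-size rectangle. The circles are
    given as (x, y, radius) and need not share a centre, so the radial
    sampling path is interpolated between both boundaries at each angle."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    h, w = img.shape
    out = np.zeros((n_r, n_theta))
    for j, t in enumerate(np.linspace(0, 2 * np.pi, n_theta, endpoint=False)):
        # boundary points on the pupil and iris circles at angle t
        bx0, by0 = xp + rp * np.cos(t), yp + rp * np.sin(t)
        bx1, by1 = xi + ri * np.cos(t), yi + ri * np.sin(t)
        for i, rho in enumerate(np.linspace(0.0, 1.0, n_r)):
            x = (1 - rho) * bx0 + rho * bx1
            y = (1 - rho) * by0 + rho * by1
            # nearest-neighbour sampling, clipped to the image bounds
            out[i, j] = img[int(np.clip(round(y), 0, h - 1)),
                            int(np.clip(round(x), 0, w - 1))]
    return out
```

On a radially symmetric test image each output row then holds a single radius, which is exactly the scale- and translation-invariance the normalization is meant to provide.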
Poursaberi proposed to normalize only half of the iris (the half closest to the pupil), thus bypassing the problem of the eyelids and eyelashes. Pereira showed, in an experiment in which the iris region was divided into ten rings of equal width, that a potentially better decision can be made using only some of the rings, namely those numbered 2, 3, 4, 5 and 7, where the ring numbered 1 is the one closest to the pupil.
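Assuming the normalized iris is stored as a rectangle with the radial direction along the rows (ring 1 nearest the pupil), Pereira's ring selection can be illustrated with the following hypothetical helper; it is not code from the cited work.

```python
import numpy as np

def select_rings(norm_iris, keep=(2, 3, 4, 5, 7), n_rings=10):
    """Split the normalised iris into n_rings bands of equal radial width
    (rows = radial direction, ring 1 nearest the pupil) and keep only the
    bands reported as most discriminative."""
    bands = np.array_split(norm_iris, n_rings, axis=0)
    return np.vstack([bands[k - 1] for k in keep])
```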
During our tests, the Daugman proposal and a model based on its implementation in Matlab by Libor Masek were used in the normalization stage. At the same time we can select the area of the iris that is subject to normalization, using both the angular distribution and the distribution along the radius. The angular division consists in determining the range of orientations over which the normalization of the iris is performed. This range is defined by two intervals: the first covers angles from −90 to 90° and the second the angles from 90 to −90° (i.e., angles measured counter-clockwise).
The last stage, feature extraction and encoding, aims to extract from the normalized iris the distinctive features of the individual and to transform them into a binary code. In order to extract the individual characteristics of the normalized iris, various types of filtering can be applied. Daugman coded each point of the iris with two bits, using two-dimensional Gabor filters and quadrature phase quantization.
Field suggested using a logarithmic variant of Gabor filters, the so-called Log-Gabor filters. These filters have certain advantages over conventional Gabor filters: by definition they do not possess a DC component, which may occur in the real part of Gabor filters. Another advantage of the logarithmic variant is that it emphasizes high frequencies over low frequencies, which brings the response of these filters closer to the typical frequency distribution of real images. Owing to this feature, logarithmic Gabor filters better expose the information contained in the image.
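A rough sketch of 1-D Log-Gabor filtering with two-bit phase quantization, in the spirit of Masek's approach, is given below; the transfer-function parameters and the function name are illustrative assumptions, not values from the cited implementations.

```python
import numpy as np

def log_gabor_encode(norm_iris, wavelength=18.0, sigma_f=0.5):
    """Filter each row of the normalised iris with a 1-D Log-Gabor filter in
    the frequency domain and quantise the phase of the complex response to
    two bits per point (signs of the real and imaginary parts)."""
    n = norm_iris.shape[1]
    freq = np.fft.fftfreq(n)        # signed sample frequencies
    f0 = 1.0 / wavelength           # filter centre frequency
    lg = np.zeros(n)
    pos = freq > 0
    # Log-Gabor transfer function: a Gaussian on a logarithmic frequency
    # axis, defined only for positive frequencies -- hence no DC component
    lg[pos] = np.exp(-(np.log(freq[pos] / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
    spectrum = np.fft.fft(norm_iris, axis=1)
    response = np.fft.ifft(spectrum * lg, axis=1)   # complex analytic response
    # two bits per sample: the quadrant of the response phase
    return np.stack([response.real >= 0, response.imag >= 0], axis=-1)
```

Concatenating the two bit planes row by row then yields the binary iris code that the matching stage compares, typically with a Hamming distance.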