Unconstrained Iris Segmentation Using Convolutional Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 11367))

Abstract

The extraction of consistent and identifiable features from an image of the human iris is known as iris recognition. Identifying which pixels belong to the iris, known as segmentation, is the first stage of iris recognition. Errors in segmentation propagate to later stages. Current segmentation approaches are tuned to specific environments.

We propose using a convolutional neural network for iris segmentation. Our algorithm is accurate when trained on a single environment and tested on multiple environments. Our network builds on the Mask R-CNN framework (He et al., ICCV 2017) and segments faster than previous approaches, including the original Mask R-CNN network.

Our network is accurate when trained on a single environment and tested with a different sensor (either visible light or near-infrared). Its accuracy degrades when trained with a visible light sensor and tested with a near-infrared sensor (and vice versa). A small amount of retraining of the visible light model (using a few samples from a near-infrared dataset) yields a tuned network that is accurate in both settings.

For training and testing, this work uses the CASIA v4 Interval, Notre Dame 0405, UBIRIS v2, and IITD datasets.

References

  1. Abdullah, M.A., Dlay, S.S., Woo, W.L., Chambers, J.A.: Robust iris segmentation method based on a new active contour force with a noncircular normalization. IEEE Trans. Syst. Man Cybern.: Syst. 47(12), 3128–3141 (2017)

  2. Alonso-Fernandez, F., Bigun, J.: Iris boundaries segmentation using the generalized structure tensor. A study on the effects of image degradation. In: 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 426–431. IEEE (2012)

  3. Alonso-Fernandez, F., Bigun, J.: Quality factors affecting iris segmentation and matching. In: 2013 International Conference on Biometrics (ICB), pp. 1–6. IEEE (2013)

  4. Arsalan, M., et al.: Deep learning-based iris segmentation for iris recognition in visible light environment. Symmetry 9(11), 263 (2017)

  5. Arsalan, M., Naqvi, R.A., Kim, D.S., Nguyen, P.H., Owais, M., Park, K.R.: IrisDenseNet: robust iris segmentation using densely connected fully convolutional networks in the images by visible light and near-infrared light camera sensors. Sensors 18(5), 1501 (2018)

  6. Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017)

  7. Bazrafkan, S., Thavalengal, S., Corcoran, P.: An end to end deep neural network for iris segmentation in unconstrained scenarios. Neural Netw. 106, 79–95 (2018)

  8. Bowyer, K.W., Flynn, P.J.: The ND-IRIS-0405 iris image dataset. CoRR abs/1606.04853 (2016)

  9. Daugman, J.: How iris recognition works. In: The Essential Guide to Image Processing, pp. 715–739. Elsevier (2009)

  10. Duda, R.O., Hart, P.E.: Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 15(1), 11–15 (1972)

  11. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997)

  12. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448 (2015)

  13. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)

  14. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988. IEEE (2017)

  15. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  16. Hofbauer, H., Alonso-Fernandez, F., Wild, P., Bigun, J., Uhl, A.: A ground truth for iris segmentation. In: 2014 22nd International Conference on Pattern Recognition (ICPR), pp. 527–532. IEEE (2014)

  17. Hough, P.V.: Method and means for recognizing complex patterns. US Patent 3,069,654, 18 December 1962

  18. Illingworth, J., Kittler, J.: A survey of the Hough transform. Comput. Vis. Graph. Image Process. 44(1), 87–116 (1988)

  19. Jalilian, E., Uhl, A.: Iris segmentation using fully convolutional encoder–decoder networks. In: Bhanu, B., Kumar, A. (eds.) Deep Learning for Biometrics. ACVPR, pp. 133–155. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61657-5_6

  20. Jalilian, E., Uhl, A., Kwitt, R.: Domain adaptation for CNN based iris segmentation. In: BIOSIG (2017)

  21. Jeong, D.S., et al.: A new iris segmentation method for non-ideal iris images. Image Vis. Comput. 28(2), 254–260 (2010)

  22. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4), 321–331 (1988)

  23. Kumar, A., Passi, A.: Comparison and combination of iris matchers for reliable personal authentication. Pattern Recogn. 43(3), 1016–1026 (2010)

  24. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: CVPR, vol. 1, p. 4 (2017)

  25. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48

  26. Liu, N., Li, H., Zhang, M., Liu, J., Sun, Z., Tan, T.: Accurate iris segmentation in non-cooperative environments using fully convolutional networks. In: 2016 International Conference on Biometrics (ICB), pp. 1–8. IEEE (2016)

  27. Proenca, H., Filipe, S., Santos, R., Oliveira, J., Alexandre, L.: The UBIRIS.v2: a database of visible wavelength images captured on-the-move and at-a-distance. IEEE Trans. PAMI 32(8), 1529–1535 (2010). https://doi.org/10.1109/TPAMI.2009.66

  28. Proenca, H.: Iris recognition: on the segmentation of degraded images acquired in the visible wavelength. IEEE Trans. Pattern Anal. Mach. Intell. 32(8), 1502–1516 (2010)

  29. Proença, H., Alexandre, L.A.: The NICE.I: noisy iris challenge evaluation - part I. In: 2007 First IEEE International Conference on Biometrics: Theory, Applications, and Systems, pp. 1–4. IEEE, September 2007

  30. Radman, A., Zainal, N., Suandi, S.A.: Automated segmentation of iris images acquired in an unconstrained environment using HOG-SVM and GrowCut. Digit. Signal Process. 64, 60–70 (2017)

  31. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99 (2015)

  32. Tan, C.W., Kumar, A.: Unified framework for automated iris segmentation using distantly acquired face images. IEEE Trans. Image Process. 21(9), 4068–4079 (2012)

  33. Tan, C.W., Kumar, A.: Towards online iris and periocular recognition under relaxed imaging constraints. IEEE Trans. Image Process. 22(10), 3751–3765 (2013)

  34. Tan, T., Sun, Z.: Casia iris v4 interval. http://biometrics.idealtest.org/

  35. Vezhnevets, V., Konouchine, V.: GrowCut: interactive multi-label ND image segmentation by cellular automata. In: Proceedings of Graphicon, vol. 1, no. 4, pp. 150–156. June 2005

  36. waleedka: Mask R-CNN (2017). https://github.com/matterport/Mask_RCNN

  37. Wildes, R.P.: Iris recognition: an emerging biometric technology. Proc. IEEE 85(9), 1348–1363 (1997)

  38. Zhao, Z., Kumar, A.: An accurate iris segmentation framework under relaxed imaging constraints using total variation model. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3828–3836. IEEE (2015)

Acknowledgement

The authors would like to thank Comcast Inc. and Synchrony Financial for partial support of this research. The authors would like to thank the reviewers for their constructive feedback.

Author information

Corresponding author

Correspondence to Sohaib Ahmad.

A Additional Related Work

As discussed in the introduction, segmentation algorithms can be classified as specialized, hybrid, or learning-based. We provide an overview of specialized and hybrid approaches below.

Specialized Approaches. The first generation of segmentation algorithms assumed that the iris and pupil are circular. Under this assumption, circle/ellipse fitting techniques are used to find the iris and pupil boundaries. Daugman's seminal work uses an integro-differential operator for iris segmentation [9]; in essence, it exhaustively fits circles to the iris. Subsequent algorithms are based on integro-differential operators and the Hough transform [10, 17, 18]. Circle fitting via the Hough transform has different regions vote for the best circle, which can be seen as an exhaustive search over possible circles [37]. Newer segmentation methods use different techniques to find candidate circles (for example, see Tan and Kumar [33]).
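The voting scheme above can be sketched in a few lines. The following is a minimal, fixed-radius illustration (the function name and the synthetic test are ours, not from any of the cited works): each edge point votes for every candidate centre that would place it on a circle of the given radius, and the accumulator peak is taken as the centre.

```python
import numpy as np

def hough_circle_votes(edge_points, radius, shape):
    """Accumulate centre votes for circles of a fixed radius.

    Each edge point votes for every centre that would place it on a
    circle of the given radius; the accumulator peak is the best centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # one vote per candidate centre
    return acc

# Synthetic "iris boundary": 50 edge points on a circle of radius 10
# centred at (25, 25); the vote peak recovers the centre.
angles = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
edges = [(25 + 10 * np.sin(a), 25 + 10 * np.cos(a)) for a in angles]
votes = hough_circle_votes(edges, radius=10, shape=(50, 50))
centre = np.unravel_index(np.argmax(votes), votes.shape)
```

A real detector would also search over radii (a 3-D accumulator), which is exactly what makes the exhaustive search expensive.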

Methods that assume the iris and pupil are circular or elliptical perform well in constrained environments, where the individual faces the camera and actively participates in image collection. However, these methods perform poorly in unconstrained environments. For example, on blurred iris images these algorithms fixate on spurious edges or diverge, converging to boundaries other than the iris boundary. Substantial pre-processing and post-processing is necessary to use these methods in unconstrained environments.

Improvements to ellipse fitting have come in the form of generalized structure tensors (GSTs) [2]. The GST approach convolves an iris-specific complex circular pattern with the collected image to find the iris and pupil, and allows the discovered region to be only approximately circular. Zhao and Kumar first use a total variation model to regularize local variations [38] and then feed the regularized image into a circular Hough transform. Interestingly, Zhao and Kumar process the lower and upper halves of the iris separately.

Active contours [22] are used for general segmentation tasks. Instead of fitting circles, active contours seek a high gradient between two sections of an image to indicate a boundary. Abdullah et al. show that, with some iris-specific modifications, active contours segment well [1]. While active contours do not assume the collected iris is circular, they can still fixate on reflections and occlusions.
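As an illustration of the objective active contours minimize, the sketch below evaluates the discrete snake energy of Kass et al. [22] for a closed contour: internal terms penalize stretching and bending, while the external term rewards points that sit on strong image gradients. The function and the synthetic test are illustrative, not any cited implementation.

```python
import numpy as np

def snake_energy(contour, grad_mag, alpha=0.1, beta=0.1):
    """Discrete active-contour (snake) energy for a closed contour.

    contour  : (N, 2) array of (row, col) points.
    grad_mag : gradient-magnitude image; high values mark boundaries.
    alpha    : weight of the elasticity (stretching) term.
    beta     : weight of the bending (curvature) term.
    """
    nxt = np.roll(contour, -1, axis=0)
    prv = np.roll(contour, 1, axis=0)
    elastic = np.sum((nxt - contour) ** 2)            # first differences
    bending = np.sum((nxt - 2 * contour + prv) ** 2)  # second differences
    rows = np.clip(np.rint(contour[:, 0]).astype(int), 0, grad_mag.shape[0] - 1)
    cols = np.clip(np.rint(contour[:, 1]).astype(int), 0, grad_mag.shape[1] - 1)
    external = -np.sum(grad_mag[rows, cols])          # reward strong edges
    return alpha * elastic + beta * bending + external

# A contour lying on a high-gradient ring has lower energy than one
# floating inside it, so gradient descent pulls snakes onto boundaries.
grad_mag = np.zeros((50, 50))
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
on_ring = np.stack([25 + 10 * np.sin(angles), 25 + 10 * np.cos(angles)], axis=1)
inside = np.stack([25 + 5 * np.sin(angles), 25 + 5 * np.cos(angles)], axis=1)
rr = np.rint(on_ring[:, 0]).astype(int)
cc = np.rint(on_ring[:, 1]).astype(int)
grad_mag[rr, cc] = 1.0
e_on = snake_energy(on_ring, grad_mag)
e_inside = snake_energy(inside, grad_mag)
```

This also shows why snakes fixate on reflections: a specular highlight produces exactly the strong gradients the external term rewards.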

Hybrid Approaches. Machine learning techniques, including neural networks, have spread into fields that previously relied on specialized approaches. Many works segment the iris using a mix of general learning techniques and specialized iris techniques. The common approach is to use a learning algorithm for rough segmentation and then post-process with specialized techniques to obtain the final segmented iris.

Proenca performs coarse iris segmentation using a neural network classifier and refines this segmentation via polynomial fitting [28]. Radman et al. use histogram of oriented gradients (HOG) features to train a support vector machine (SVM) [30]; the trained SVM is then used to localize the iris in new images, and subsequent segmentation is done with the GrowCut algorithm [35], which labels pixels based on an initial guess. AdaBoost [11] based eye detection is commonly used to identify the iris region inside a facial image; Jeong et al. [21] refined this technique by adding eyelid and occlusion removal algorithms. Tan and Kumar use Zernike moments as input features to train a neural network and an SVM for coarse iris pixel classification [32].
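The GrowCut labeling step can be sketched as a cellular automaton update. The following is a simplified variant for illustration (names, the sequential direction order, and the wrap-around border handling are ours, not the cited implementation): each pixel is attacked by its 4-neighbours with a force that decays with intensity difference, and adopts the attacker's label when that force exceeds its own strength.

```python
import numpy as np

def growcut_step(image, labels, strength, max_diff=1.0):
    """One GrowCut evolution step on a 2-D grayscale image.

    A neighbour q attacks pixel p with force g(|C_p - C_q|) * strength_q,
    where g decays linearly with intensity difference; p adopts q's label
    when the attack exceeds p's current strength. (Simplified: directions
    are applied sequentially, and np.roll wraps at the image border.)
    """
    new_labels = labels.copy()
    new_strength = strength.copy()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        q_img = np.roll(image, (dy, dx), axis=(0, 1))
        q_lab = np.roll(labels, (dy, dx), axis=(0, 1))
        q_str = np.roll(strength, (dy, dx), axis=(0, 1))
        g = 1.0 - np.abs(image - q_img) / max_diff
        attack = g * q_str
        win = attack > new_strength
        new_labels[win] = q_lab[win]
        new_strength[win] = attack[win]
    return new_labels, new_strength

# Two flat regions and one seed per class; each seed floods its own
# region but cannot cross the intensity step (g = 0 there).
img = np.zeros((5, 5))
img[:, 3:] = 1.0
labels = np.zeros((5, 5), dtype=int)
strength = np.zeros((5, 5))
labels[2, 0], strength[2, 0] = 1, 1.0  # e.g. an "iris" seed
labels[2, 4], strength[2, 4] = 2, 1.0  # e.g. a "background" seed
for _ in range(6):
    labels, strength = growcut_step(img, labels, strength)
```

In the hybrid pipeline above, the initial guess (the seeds) comes from the learned classifier, and iterating this update produces the final pixel labels.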


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Ahmad, S., Fuller, B. (2019). Unconstrained Iris Segmentation Using Convolutional Neural Networks. In: Carneiro, G., You, S. (eds) Computer Vision – ACCV 2018 Workshops. ACCV 2018. Lecture Notes in Computer Science(), vol 11367. Springer, Cham. https://doi.org/10.1007/978-3-030-21074-8_36

  • DOI: https://doi.org/10.1007/978-3-030-21074-8_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-21073-1

  • Online ISBN: 978-3-030-21074-8

  • eBook Packages: Computer Science, Computer Science (R0)
