Quality and Demographic Investigation of ICE 2006

Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)


There have been four major experimental evaluations of iris recognition technology in recent years: the ITIRT evaluation conducted by the International Biometric Group; the Iris ’06 evaluation conducted by Authenti-Corp; and the Iris Challenge Evaluation (ICE) 2006 and the Iris Exchange (IREX), both conducted by the National Institute of Standards and Technology. These evaluations employed different vendor technologies and experimental specifications, yet yielded consistent results in the areas where their specifications intersect. In ICE 2006, participants were allowed to submit quality measures; we investigate the properties of those quality submissions.


Keywords: Quality Measure · Iris Image · Quality Module · False Reject Rate · Iris Recognition



We acknowledge the support of the Department of Homeland Security’s Science and Technology Directorate and Transportation Security Administration (TSA), the Director of National Intelligence’s Information Technology Innovation Center, the Federal Bureau of Investigation (FBI), the National Institute of Justice, and the Technical Support Working Group (TSWG). This work was undertaken during PJF’s sabbatical leave at NIST; the support of NIST is gratefully acknowledged. Biometrics research at the University of Notre Dame is supported by the National Science Foundation under grant CNS01-30839, by the US Department of Justice, and by UNISYS Corp. The identification of any commercial product or trade name does not imply endorsement or recommendation by the National Institute of Standards and Technology or the University of Notre Dame.



Copyright information

© Springer-Verlag London 2016

Authors and Affiliations

  1. National Institute of Standards and Technology, Gaithersburg, USA
  2. University of Notre Dame, Notre Dame, USA
