Analysis of the Impact of Ear Alignment on Unconstrained Ear Recognition

  • Elaine Grenot-Castellano
  • Yoanna Martínez-Díaz
  • Francisco José Silva-Mata
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11896)

Abstract

The use of the ear in biometric recognition has been widely studied in controlled environments. However, the advantages of the ear as a biometric characteristic impose the need to know how it behaves in unconstrained scenarios, where occlusions, pose variations, illumination changes and different resolutions are common. Given this challenge, and considering experience with other biometric recognition processes, alignment has proven to be a key step. In this work, we carry out an exhaustive and detailed study of the impact of alignment on the performance of several state-of-the-art ear descriptors when the images are captured in uncontrolled conditions. Our analysis is based on identification experiments against different types of variations in ear images from the challenging UERC dataset. The results corroborate the hypothesis that alignment also improves the efficacy of the ear recognition process, and show how this improvement behaves under various factors such as head rotation, occlusion, flipping and resolution.
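As an illustration of the alignment step the abstract refers to, the sketch below estimates a 2×3 similarity transform (rotation, uniform scale, translation) mapping two annotated ear landmarks onto canonical positions in a fixed-size crop. This is only a minimal sketch under assumed inputs: the paper does not prescribe this method, and the function name, landmark choices and crop size are hypothetical.

```python
import numpy as np

def similarity_align_matrix(src_pts, dst_pts):
    """Estimate a 2x3 similarity transform (rotation + uniform scale +
    translation) that maps two source landmarks onto two target landmarks.
    Hypothetical helper for illustration only."""
    (x0, y0), (x1, y1) = np.asarray(src_pts, dtype=float)
    (u0, v0), (u1, v1) = np.asarray(dst_pts, dtype=float)
    # Complex-number formulation: find a, b such that dst = a * src + b,
    # where a encodes rotation and scale, and b encodes translation.
    s0, s1 = complex(x0, y0), complex(x1, y1)
    d0, d1 = complex(u0, v0), complex(u1, v1)
    a = (d1 - d0) / (s1 - s0)   # rotation + uniform scale
    b = d0 - a * s0             # translation
    return np.array([[a.real, -a.imag, b.real],
                     [a.imag,  a.real, b.imag]])

# e.g. map two annotated ear landmarks (assumed: top of helix, bottom of
# lobe) onto canonical positions on the vertical axis of a 96x96 crop.
M = similarity_align_matrix([(30.0, 10.0), (34.0, 80.0)],
                            [(48.0, 8.0), (48.0, 88.0)])
```

The resulting matrix can then be handed to any affine warping routine to produce the aligned crop; normalizing rotation and scale this way is what lets descriptors compare corresponding ear regions across images.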

Keywords

Ear alignment · Unconstrained ear recognition · Covariates

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Elaine Grenot-Castellano (1)
  • Yoanna Martínez-Díaz (1)
  • Francisco José Silva-Mata (1)

  1. Advanced Technologies Application Center, Havana, Cuba