
Pattern Recognition and Image Analysis, Volume 25, Issue 3, pp. 461–465

Simultaneous classification of several features of a person’s appearance using a deep convolutional neural network

  • A. I. Kukharenko
  • A. S. Konushin
Applied Problems

Abstract

In this paper, we describe a convolutional neural network model for automatic simultaneous extraction of several features of a person's appearance from an image. The proposed model is a deep convolutional neural network with common initial layers and several probabilistic outputs. It retains the high accuracy of a convolutional network while extracting several appearance features in the time required to extract a single one. The network is evaluated on photographs from the LFW (Labeled Faces in the Wild) database. As appearance features, we use the person's sex and the presence of a mustache and a beard. The classification accuracy for each feature is no less than 91.5%, which is among the best results reported for the LFW database. The model can be used for simultaneous classification of a larger number of appearance features without a significant increase in running time.
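
The abstract does not give the exact layer configuration, so the following is only a minimal sketch of the shared-trunk, multi-head design it describes, written in PyTorch. The layer sizes, the 128×128 input resolution, and the name MultiAttributeCNN are illustrative assumptions; only the idea of common initial layers feeding several two-class probabilistic outputs (sex, mustache, beard) comes from the text.

```python
# Hypothetical sketch, not the authors' architecture: a small CNN trunk shared
# across attributes, with one softmax head per binary appearance feature.
import torch
import torch.nn as nn

class MultiAttributeCNN(nn.Module):
    def __init__(self, attributes=("sex", "mustache", "beard")):
        super().__init__()
        # Common initial layers, computed once per image.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        trunk_dim = 64 * 16 * 16  # flattened size for a 128x128 RGB input
        # One two-class output per appearance feature.
        self.heads = nn.ModuleDict({name: nn.Linear(trunk_dim, 2)
                                    for name in attributes})

    def forward(self, x):
        shared = self.trunk(x)
        # Each head adds only a single linear layer on top of the shared
        # features, so extracting several features costs roughly the same
        # time as extracting one.
        return {name: head(shared) for name, head in self.heads.items()}

model = MultiAttributeCNN()
logits = model(torch.randn(1, 3, 128, 128))
probs = {name: out.softmax(dim=1) for name, out in logits.items()}
```

At training time such a network would minimize the sum of the per-head cross-entropy losses; at inference the shared trunk is evaluated once, which is what keeps the running time nearly independent of the number of features.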

Keywords

convolutional neural network model



Copyright information

© Pleiades Publishing, Ltd. 2015

Authors and Affiliations

1. Department of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia
