
Journal of Real-Time Image Processing, Volume 3, Issue 1–2, pp 109–116

Automatic gender recognition based on pixel-pattern-based texture feature

  • Huchuan Lu
  • Yingjie Huang
  • Yenwei Chen
  • Deli Yang
Original Research Paper

Abstract

A pixel-pattern-based texture feature (PPBTF) is proposed for real-time gender recognition. A gray-scale image is transformed into a pattern map in which edges and lines characterize the texture information. On the basis of the pattern map, a feature vector is formed from the counts of pixels belonging to each pattern. We use the image basis functions obtained by principal component analysis (PCA) as the templates for pattern matching. The characteristics of the feature are comprehensively analyzed through an application to gender recognition. Adaboost is used to select the most discriminative feature subset, and a support vector machine (SVM) is adopted for classification. In experiments on frontal images from the FERET database, comparisons with Gabor features show that PPBTF is an effective facial representation and is faster to compute.
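The pipeline the abstract describes (PCA-learned templates, per-pixel pattern labeling, pattern-count feature vector) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size (5×5), the number of templates (8), the training-patch count, and the use of correlation magnitude as the matching score are all assumptions made here for concreteness.

```python
import numpy as np

def ppbtf_feature(image, patch=5, n_templates=8, rng=None):
    """Sketch of a pixel-pattern-based texture feature (PPBTF).

    Assumed details (not fixed by the abstract): 5x5 patches,
    8 PCA basis templates, correlation magnitude as the match score.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape

    # Learn templates: PCA (via SVD) on a sample of centered patches.
    ys = rng.integers(0, h - patch, 500)
    xs = rng.integers(0, w - patch, 500)
    P = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    P = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    templates = Vt[:n_templates]          # PCA basis functions as templates

    # Pattern map: label each position by its best-matching template.
    pattern_map = np.zeros((h - patch + 1, w - patch + 1), dtype=int)
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            v = image[y:y + patch, x:x + patch].ravel()
            v = v - v.mean()
            scores = templates @ v        # correlation with each template
            pattern_map[y, x] = np.argmax(np.abs(scores))

    # Feature vector: number of pixels assigned to each pattern.
    return np.bincount(pattern_map.ravel(), minlength=n_templates)
```

In practice such a feature vector would then feed the Adaboost feature-selection and SVM classification stages described above; the per-pixel loop is written for clarity and would be vectorized for real-time use.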

Keywords

Gender recognition · PPBTF · Adaboost · Gabor filter · SVM classifier


Acknowledgments

Portions of the research in this article use the FERET database of facial images collected under the FERET program.


Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  • Huchuan Lu (1)
  • Yingjie Huang (1)
  • Yenwei Chen (1, 2)
  • Deli Yang (3)
  1. Department of Electronic Engineering, Dalian University of Technology, Dalian, Liaoning, China
  2. College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
  3. Department of Management, Dalian University of Technology, Dalian, Liaoning, China
