Facial Trait Code and Its Application to Face Recognition

  • Ping-Han Lee
  • Gee-Sern Hsu
  • Tsuhan Chen
  • Yi-Ping Hung
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5359)


We propose the Facial Trait Code (FTC) for encoding human facial images. The FTC is motivated by the discovery of basic types of local facial features, called facial trait bases, which can be extracted from a large number of faces; fusing these trait bases accurately captures the appearance of a face. Extracting the facial trait bases involves clustering and boosting, yielding the local patches that best discriminate between human faces. The extracted trait bases are symbolized and make up the n-ary facial trait codes. A given face can then be encoded at the patches specified by the traits, producing an n-ary facial trait code in which each symbol of the codeword corresponds to the closest trait base. We applied the FTC to a typical face identification problem, and it yielded satisfactory results under different illumination conditions.
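The encoding step described above (assign each facial patch the symbol of its nearest trait base, then compare faces by their codewords) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch feature vectors, the per-patch trait bases, and the Hamming-style comparison are assumptions for demonstration.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def encode(face_patches, trait_bases):
    """Encode a face as an n-ary facial trait code.

    face_patches: one feature vector per predefined facial patch.
    trait_bases:  per patch, a list of trait-base vectors; the index of a
                  base serves as its symbol (0 .. n-1).
    Each patch is assigned the symbol of its closest trait base.
    """
    code = []
    for patch, bases in zip(face_patches, trait_bases):
        symbol = min(range(len(bases)), key=lambda s: dist(patch, bases[s]))
        code.append(symbol)
    return code

def code_distance(code_a, code_b):
    """Number of disagreeing symbols between two trait codes
    (a simple Hamming-style distance, assumed here for identification)."""
    return sum(a != b for a, b in zip(code_a, code_b))
```

For example, with two patches and two hypothetical trait bases per patch, `encode([(1, 1), (4, 4)], [[(0, 0), (10, 10)], [(0, 0), (5, 5)]])` yields the codeword `[0, 1]`, and identification would pick the enrolled face whose codeword minimizes `code_distance`.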


Keywords: Face Recognition, Facial Image, Illumination Normalization, Adaptive Histogram Equalization, Patch Sample





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ping-Han Lee (1)
  • Gee-Sern Hsu (2)
  • Tsuhan Chen (3)
  • Yi-Ping Hung (1)
  1. National Taiwan University, ROC
  2. National Taiwan University of Science and Technology, ROC
  3. Carnegie Mellon University, USA
