Multimedia Tools and Applications, Volume 49, Issue 3, pp 545–565

Aligned texture map creation for pose invariant face recognition

  • Antonio Rama
  • Francesc Tarrés
  • Jürgen Rurainsky


In recent years, face recognition based on 3D techniques has emerged as a technology that demonstrates better results than conventional 2D approaches. Using texture (a 180° multi-view image) together with depth maps is expected to increase robustness to the two main challenges in face recognition: pose and illumination. Nevertheless, 3D data must be acquired under highly controlled conditions and, in most cases, depends on the collaboration of the subject to be recognized. Thus, in applications such as surveillance or access control points, this kind of 3D data may not be available during the recognition process. This leads to a new paradigm of mixed 2D-3D face recognition systems, where 3D data is used in training but either 2D or 3D information can be used in recognition, depending on the scenario. Following this concept, where only part of the information (the partial concept) is used in recognition, a novel method is presented in this work. It is called Partial Principal Component Analysis (P2CA), since it fuses the partial concept with the fundamentals of the well-known PCA algorithm. This strategy has proven very robust under pose variation, showing that the 3D training process retains all the spatial information of the face while the 2D picture effectively recovers the face information from the available data. Furthermore, this work presents a novel approach for the automatic creation of 180° aligned cylindrically projected face images from nine different views. These face images are created using a cylindrical approximation of the real object surface. Alignment is performed by first applying a global 2D affine transformation to the image, and afterwards a local transformation of the desired face features using a triangle mesh. This local alignment allows a closer look at the feature properties rather than at their differences.
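The geometric ingredients just described can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: it shows a cylindrical surface coordinate for a 3D point, a least-squares 2D affine fit for the global alignment step, and a barycentric triangle-to-triangle mapping of the kind a triangle-mesh warp applies per point. All function names are hypothetical.

```python
import numpy as np

def cylinder_coords(p):
    """Project a 3D point onto a vertical cylinder around the head axis.
    Returns (theta, height) -- the texture-map coordinates under a
    cylindrical approximation of the face surface."""
    x, y, z = p
    return np.arctan2(x, z), y

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    Returns a 2x3 matrix A with dst ~= A @ [x, y, 1]^T (global alignment)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])            # homogeneous coords, (N, 3)
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solve X @ A.T = dst
    return A_T.T

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

def warp_point(p, src_tri, dst_tri):
    """Map point p from a source triangle to a destination triangle via
    barycentric coordinates -- the per-triangle step of a local mesh warp."""
    T = np.column_stack([src_tri[1] - src_tri[0], src_tri[2] - src_tri[0]])
    lam = np.linalg.solve(T, p - src_tri[0])
    w = np.array([1.0 - lam.sum(), lam[0], lam[1]])  # barycentric weights
    return w @ dst_tri
```

In a full pipeline, `estimate_affine` would be fit on detected landmark pairs and `warp_point` evaluated per pixel inside each mesh triangle; the sketch only shows the per-point mathematics.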
Finally, these aligned face images are used to train a pose-invariant face recognition approach (P2CA).
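Since P2CA builds on the fundamentals of PCA, a minimal eigenface-style training and matching loop gives the flavour of the underlying machinery. This is a sketch of standard PCA/eigenfaces (Turk–Pentland style), not of the partial variant itself; function names and the SVD-based formulation are assumptions for illustration.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (N, D) matrix of flattened, aligned face images.
    Returns the mean face and the top-k eigenfaces (rows of Vt from an
    SVD of the mean-centered data)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    """Coefficients of a face in the eigenface subspace."""
    return eigenfaces @ (face - mean)

def recognize(probe, gallery_coeffs, mean, eigenfaces):
    """Nearest-neighbour match in coefficient space; returns gallery index."""
    c = project(probe, mean, eigenfaces)
    dists = np.linalg.norm(gallery_coeffs - c, axis=1)
    return int(np.argmin(dists))
```

In the mixed 2D-3D setting of the paper, training would use the 180° aligned texture maps, while at recognition time only a partial (single-view) projection of the probe is available.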


Keywords: 3D face recognition · Partial PCA · PCA · Eigenfaces · Image stitching · Image alignment



The work presented was developed within VISNET II, a European Network of Excellence funded under the EC IST FP6 programme.



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Antonio Rama (1)
  • Francesc Tarrés (1)
  • Jürgen Rurainsky (2)
  1. Department of Signal Theory and Communications, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
  2. Image Processing Department, Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (HHI), Berlin, Germany
