International Journal of Social Robotics, Volume 5, Issue 4, pp 627–639

Effects of 3D Shape and Texture on Gender Identification for a Retro-Projected Face Screen

  • Takaaki Kuratate
  • Marcia Riley
  • Gordon Cheng


Retro-projected face displays have recently emerged as an alternative to mechanical robot faces, and stand apart by virtue of their flexibility: they can present a variety of faces varying in both realism and individual appearance. Here we examine the roles of 3D mask structure and texture image quality in the perception of gender on one such platform, the Mask-bot. In our experiments, we use three gender-specific face screens as the 3D output (female, male, and average masks) and display face images that are gradually morphed between female and male on these screens. Additionally, we present three cases of morphed images: high-quality texture, low-quality texture, and averaged face texture derived from low-quality data. Experiments were carried out over several days: 15 subjects rated the gender of each face projected on the female mask screen, and 10 subjects rated the gender of faces on the male and average screens. We found that even though the 3D mask screens have strong gender-specific facial features, gender identification is strongly determined by high-quality texture images. However, in the absence of strong texture cues, or in the presence of ambiguous information, the influence of the output structure may become more important. These results allow us to assess how faithfully faces can be represented on these new platforms, and highlight the most important aspect, in this case texture, for correct perception.
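The morphing described above steps gradually between a female and a male face texture. The paper's own pipeline presumably uses shape-correspondence-based face morphing; as a minimal illustrative sketch only, a pixel-wise linear cross-dissolve between two pre-aligned textures captures the idea of producing intermediate morph levels (the function name and the nine-level choice here are hypothetical, not taken from the paper):

```python
import numpy as np

def morph_textures(female_tex: np.ndarray, male_tex: np.ndarray, alpha: float) -> np.ndarray:
    """Pixel-wise linear blend of two aligned face textures.

    alpha = 0.0 returns the female texture, alpha = 1.0 the male texture;
    intermediate values give gradual morphs between the two.
    """
    return ((1.0 - alpha) * female_tex + alpha * male_tex).astype(female_tex.dtype)

# Toy stand-ins for two aligned RGB face textures (real data would be images).
female = np.zeros((256, 256, 3), dtype=np.float32)
male = np.ones((256, 256, 3), dtype=np.float32)

# Nine evenly spaced morph levels from fully female to fully male.
levels = [morph_textures(female, male, a) for a in np.linspace(0.0, 1.0, 9)]
```

A real experiment would additionally warp facial landmarks into correspondence before blending, so that features such as eyes and mouth stay aligned across morph levels.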


Face robot · Facial animation · 3D face · Retro-projected face · Gender identification



This work was supported by the DFG cluster of excellence "Cognition for Technical Systems" (CoTeSys), Germany.

We also thank ATR-International (Kyoto, Japan) and the MARCS Institute (formerly MARCS Auditory Laboratories, Sydney, Australia) for providing access to their 3D face databases in support of this research.



Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Institute for Cognitive Systems, Technische Universität München, München, Germany
