
International Journal of Social Robotics, Volume 5, Issue 4, pp 423–439

Visualization of Facial Expression Deformation Applied to the Mechanism Improvement of Face Robot

  • Li-Chieh Cheng
  • Chyi-Yeu Lin
  • Chun-Chia Huang

Abstract

Static and dynamic realism of appearance is an essential but challenging goal in the development of human face robots. In most existing face robots, human facial anatomy is the primary theoretical foundation for designing the facial expression mechanism. Following the widely studied facial action units, actuators are connected to control points underneath the facial skin and pull in prearranged directions to mimic the facial muscles involved in generating expressions. Most face robots nevertheless fail to generate realistic expressions, because pulling wires on a single sheet of artificial skin deforms it quite differently from the way contracting muscles and inner tissues deform human facial skin. This paper proposes a design approach that uses reverse-engineering techniques of three-dimensional measurement and analysis to visualize critical facial motion data, including localized deformations of the facial skin, motion directions of facial features, and displacements of facial skin elements in different facial expressional states. The effectiveness and robustness of the proposed approach have been verified in real design cases on face robots.
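To make the measurement-driven idea concrete, the sketch below (illustrative only, not the authors' implementation; the point arrays, the rigid pre-alignment assumption, and the nearest-neighbour pairing are our own assumptions) shows one way a skin-displacement field between a neutral scan and an expressional scan could be computed for visualization, in Python with NumPy and SciPy:

    import numpy as np
    from scipy.spatial import cKDTree

    def displacement_field(neutral_pts, expression_pts):
        """Per-point skin displacement from a neutral 3D scan to an
        expressional scan. Both (N, 3) arrays are assumed to be already
        rigidly registered, e.g. via ICP on expression-invariant regions
        such as the forehead and the bridge of the nose."""
        tree = cKDTree(expression_pts)
        # Pair every neutral surface point with its closest point on the
        # expressional surface; the offset approximates local skin motion.
        _, idx = tree.query(neutral_pts)
        vectors = expression_pts[idx] - neutral_pts      # motion directions
        magnitudes = np.linalg.norm(vectors, axis=1)     # localized deformation
        return vectors, magnitudes

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        neutral = rng.normal(size=(1000, 3))            # stand-in for a real scan
        smiling = neutral + np.array([0.0, 0.4, 0.1])   # synthetic deformation
        vecs, mags = displacement_field(neutral, smiling)
        print(f"mean displacement {mags.mean():.3f}, max {mags.max():.3f}")

The resulting vectors can then be rendered as arrows or a colour map over the neutral scan to guide the placement and pull directions of the control points that actuate the robot's artificial skin.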

Keywords

Face robot · Facial expression analysis · Facial expression generation · Facial expression deformation

Notes

Acknowledgements

This research was financially supported by the National Science Council of the Republic of China (Taiwan) under grants NSC 94-2212-E-011-032 and NSC 96-2218-E-011-002. Their support made this research and its outcome possible.

Supplementary material

Supplementary video (MPG 17.4 MB)


Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. National Taiwan University of Science and Technology, Taipei, Taiwan
