
Body Movement Analysis and Recognition

  • Chapter

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

This chapter addresses a nonverbal mode of communication for human–robot interaction based on the understanding of human upper body gestures. A human–robot interaction system built on a novel combination of sensors is proposed. It allows a person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures, including human–object interactions, is used for communication. The gestures are characterized by head, arm, and hand posture information. A CyberGlove II captures the hand posture; this is combined with head and arm posture information captured by a Microsoft Kinect, yielding a new sensor solution for human gesture capture. Based on the body posture data, an effective, real-time human gesture recognition method is proposed. For the experiments, a human body gesture dataset was built. The experimental results demonstrate the effectiveness and efficiency of the proposed approach.
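The abstract describes fusing hand posture from a data glove with head and arm posture from a depth sensor, then classifying the combined feature. As a hypothetical illustration only (not the authors' implementation; the joint layout, feature dimensions, and the 1-nearest-neighbor classifier are all assumptions), such a sensor-fusion pipeline might be sketched as:

```python
# Hypothetical sketch of posture fusion and gesture classification.
# kinect_joints and glove_angles are stand-ins for real sensor readings.
import math

def fuse_features(kinect_joints, glove_angles):
    """Concatenate Kinect head/arm joint coordinates with CyberGlove
    finger-joint angles into one feature vector."""
    feature = []
    for (x, y, z) in kinect_joints:   # e.g., head, shoulders, elbows, wrists
        feature.extend([x, y, z])
    feature.extend(glove_angles)      # e.g., joint-angle readings from the glove
    return feature

def classify_1nn(sample, gallery):
    """Return the gesture label of the closest training sample
    (plain Euclidean distance)."""
    best_label, best_dist = None, float("inf")
    for label, vec in gallery:
        d = math.dist(sample, vec)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

In a real system each gallery entry would be a labeled training gesture, and a learned distance metric (rather than plain Euclidean distance) would typically be used before nearest-neighbor matching.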


Notes

  1. http://www.aldebaran-robotics.com/

  2. http://imi.ntu.edu.sg/Pages/Home.aspx

  3. http://thrift.apache.org/

  4. The source code is available at http://www.cse.wustl.edu/~kilian/code/lmnn/lmnn.html


Author information

Correspondence to Yang Xiao.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Xiao, Y., Liang, H., Yuan, J., Thalmann, D. (2016). Body Movement Analysis and Recognition. In: Magnenat-Thalmann, N., Yuan, J., Thalmann, D., You, BJ. (eds) Context Aware Human-Robot and Human-Agent Interaction. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-19947-4_2

  • DOI: https://doi.org/10.1007/978-3-319-19947-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19946-7

  • Online ISBN: 978-3-319-19947-4

  • eBook Packages: Computer Science (R0)
