
Extending the Perceptual User Interface to Recognise Movement

  • Richard Green
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3101)

Abstract

Perceptual User Interfaces (PUIs) automatically extract user input from natural and implicit components of human activity such as gestures, direction of gaze, facial expression and body movement. This paper presents a Continuous Human Movement Recognition (CHMR) system for recognising a large range of specific movement skills from continuous 3D full-body motion. A new methodology defines an alphabet of dynemes, units of full-body movement skills, to enable recognition of diverse skills. Using multiple Hidden Markov Models, the CHMR system attempts to infer the movement skill that could have produced the observed sequence of dynemes. This approach enables the CHMR system to track and recognise hundreds of full-body movement skills, from gait to twisting somersaults. This extends the perceptual user interface beyond frontal posing or tracking only one hand, to recognising and understanding full-body movement in terms of everyday activities.
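The multiple-HMM recognition step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: each movement skill is modelled by a small discrete HMM over a dyneme alphabet, the forward algorithm scores an observed dyneme sequence under each model, and the best-scoring skill is reported. The skill names, dyneme symbols and all probabilities below are invented for illustration.

```python
# Hypothetical sketch of multiple-HMM skill recognition: one HMM per
# movement skill scores an observed dyneme sequence; the skill whose
# model assigns the highest likelihood wins. All parameters are toy values.
from typing import Dict, List, Tuple

# An HMM is (initial state probs, transition matrix, per-state emission probs).
HMM = Tuple[List[float], List[List[float]], List[Dict[str, float]]]

def sequence_likelihood(hmm: HMM, obs: List[str]) -> float:
    """Forward algorithm: P(obs | model), with a small floor for unseen dynemes."""
    pi, A, B = hmm
    n = len(pi)
    alpha = [pi[s] * B[s].get(obs[0], 1e-9) for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * A[p][s] for p in range(n)) * B[s].get(o, 1e-9)
            for s in range(n)
        ]
    return sum(alpha)

def recognise(models: Dict[str, HMM], obs: List[str]) -> str:
    """Return the skill whose HMM best explains the observed dyneme sequence."""
    return max(models, key=lambda skill: sequence_likelihood(models[skill], obs))

# Two toy 2-state skill models over an invented dyneme alphabet.
models: Dict[str, HMM] = {
    "walk": ([0.9, 0.1],
             [[0.7, 0.3], [0.3, 0.7]],
             [{"step": 0.8, "swing": 0.2}, {"swing": 0.7, "step": 0.3}]),
    "jump": ([0.5, 0.5],
             [[0.6, 0.4], [0.4, 0.6]],
             [{"crouch": 0.6, "extend": 0.4}, {"extend": 0.9, "crouch": 0.1}]),
}

print(recognise(models, ["step", "swing", "step", "swing"]))  # walk
```

In a real CHMR system the discrete dyneme symbols would themselves be the output of a 3D tracking front end, and the per-skill models would be trained on labelled motion data rather than hand-specified.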

Keywords

Hidden Markov Model · Motion Vector · Context Model · American Sign · Skill Model



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Richard Green
    Computer Science, University of Canterbury, Christchurch, New Zealand
