
Abstract

When a person enters a room, he or she immediately forms a mental concept of “what is going on” there: people may be working, people may be engaged in a conversation, or the room may be empty. The CHIL services depend on the same kind of semantic description, which is termed activity in the following. The “Connector” or the “Memory Jog”, for example, could provide support appropriate to the given context if it knew the current activity at the user’s location. This kind of higher-level understanding of human interaction processes could then be used, e.g., for rating the user’s current availability in a given situation.
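
As a purely illustrative sketch (not part of the chapter) of how a recognized activity label could feed an availability rating for a service such as the Connector, the following Python snippet maps a hypothetical set of activity classes to coarse availability scores. All activity names, score values, and the function rate_availability are assumptions made for illustration only.

```python
# Hypothetical sketch: mapping a recognized room activity to a coarse
# availability score that a context-aware service (e.g., the Connector)
# could use when deciding whether to interrupt the user.
# Activity labels and scores are illustrative assumptions, not the chapter's method.

from typing import Dict

# Assumed activity classes and how interruptible the user likely is (0 = busy, 1 = free).
AVAILABILITY_BY_ACTIVITY: Dict[str, float] = {
    "empty_room": 1.0,       # nobody present, nothing to interrupt
    "individual_work": 0.6,  # working alone; short interruptions may be acceptable
    "conversation": 0.3,     # engaged with others; interrupt only if important
    "presentation": 0.1,     # speaking to a group; avoid interruptions
}


def rate_availability(activity: str, confidence: float) -> float:
    """Combine the classifier's activity label and its confidence into one score.

    When the classifier is uncertain, the score falls back toward a neutral 0.5.
    """
    base = AVAILABILITY_BY_ACTIVITY.get(activity, 0.5)
    return confidence * base + (1.0 - confidence) * 0.5


if __name__ == "__main__":
    # Example: the activity classifier reports "conversation" with 80% confidence.
    print(rate_availability("conversation", confidence=0.8))  # -> 0.34
```

In practice, such a mapping and the handling of classifier confidence would be learned or tuned per service; the sketch simply interpolates toward a neutral score when the activity estimate is uncertain.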




Copyright information

© 2009 Springer-Verlag London Limited


Cite this chapter

Nickel, K., Pardàs, M., Stiefelhagen, R., Canton, C., Landabaso, J.L., Casas, J.R. (2009). Activity Classification. In: Waibel, A., Stiefelhagen, R. (eds) Computers in the Human Interaction Loop. Human–Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-84882-054-8_11


  • DOI: https://doi.org/10.1007/978-1-84882-054-8_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-053-1

  • Online ISBN: 978-1-84882-054-8

