
Extracting Interaction Cues: Focus of Attention, Body Pose, and Gestures


Abstract

Studies in social psychology [7] have experimentally confirmed the common intuition that nonverbal behavior, including but not limited to gaze and facial expressions, plays a major role in human interaction. Proxemics [4] describes the social aspects of the distance between interacting individuals: this distance is an indicator of the interactions taking place and provides valuable information for understanding human relationships.
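As a purely illustrative aside (not taken from the chapter), the interpersonal distance mentioned above is often discretised into Hall's proxemic zones. The Python sketch below assumes the commonly quoted approximate thresholds of 0.45 m, 1.2 m, and 3.6 m; the function name and cut-offs are illustrative, not the chapter's method.

    def proxemic_zone(distance_m: float) -> str:
        """Map the distance between two people (in metres) to one of
        Hall's proxemic zones, using commonly quoted approximate bounds."""
        if distance_m < 0.45:
            return "intimate"
        if distance_m < 1.2:
            return "personal"
        if distance_m < 3.6:
            return "social"
        return "public"

    # Example: two tracked meeting participants standing about 1 m apart
    print(proxemic_zone(1.0))  # -> "personal"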


References

  1. P. Chippendale. Towards automatic body language annotation. In 7th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2006), pages 487–492, Southampton, UK, Apr. 2006.

  2. P. Chippendale and O. Lanz. Optimised meeting recording and annotation using real-time video analysis. In 5th Joint Workshop on Machine Learning and Multimodal Interaction (MLMI 2008), Utrecht, The Netherlands, Sept. 2008.

  3. J. W. Davis. Hierarchical motion history images for recognizing human motion. In IEEE Workshop on Detection and Recognition of Events in Video, page 39, 2001.

  4. E. T. Hall. The Hidden Dimension: Man’s Use of Space in Public and Private. Bodley Head, London, 1969.

  5. B. Lepri, M. Zancanaro, and F. Pianesi. Automatic detection of group functional roles in face to face interactions. In Proceedings of the International Conference on Multimodal Interfaces (ICMI 2006), pages 28–34, 2006.

  6. M. Voit and R. Stiefelhagen. Tracking head pose and focus of attention with multiple far-field cameras. In International Conference on Multimodal Interfaces (ICMI 2006), Banff, Canada, Nov. 2006.

  7. K. Parker. Speaking turns in small group interaction: A context-sensitive event sequence model. Journal of Personality and Social Psychology, 54(6), 1988.

  8. S. Phung, A. Bouzerdoum, and D. Chai. A novel skin colour model in YCbCr colour space and its application to human face detection. In International Conference on Image Processing (ICIP 2002), volume 1, pages 289–292, Sept. 2002.


Copyright information

© 2009 Springer-Verlag London Limited

About this chapter

Cite this chapter

Lanz, O., Brunelli, R., Chippendale, P., Voit, M., Stiefelhagen, R. (2009). Extracting Interaction Cues: Focus of Attention, Body Pose, and Gestures. In: Waibel, A., Stiefelhagen, R. (eds) Computers in the Human Interaction Loop. Human–Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-84882-054-8_9


  • DOI: https://doi.org/10.1007/978-1-84882-054-8_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-053-1

  • Online ISBN: 978-1-84882-054-8

  • eBook Packages: Computer Science (R0)
