Human Sensing

  • Yuichi Nakamura
Chapter

Abstract

Places such as homes, offices, workplaces, classrooms, conference rooms, and streets can be considered fields where humans act. The purpose of human sensing is to observe and analyze humans and the social interactions that occur among them in these fields, in order to discover human activities and social systems, to design and establish new social systems and environments, and to develop new information media and artifacts. Technology for sensing humans has been developed in various fields, including medicine/physiology, engineering, psychology, and sociology. Examples include media processing and artificial intelligence for observing humans and automatically recognizing their intentions, and human factors engineering for designing artifacts and user interfaces (Knapp and Hall 1972; Wickens et al. 2004). By incorporating technologies from these fields, we consider how to observe and analyze the complicated, multilayered phenomena in target fields from as many perspectives as possible. This chapter is organized as follows: Sect. 3.1 describes the types of information to be collected, and Sect. 3.2 explains each sensing technology in detail. Section 3.3 presents examples of acquiring multifaceted data and browsing human activities; these examples demonstrate the latest information media technology. Section 3.4 discusses scenario examples. Note that the sensing technologies for nature and biologging described in Chaps. 1 and 2 overlap with human sensing technologies, so referring to those chapters is recommended.
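As a concrete taste of the camera-based observation mentioned above, the following is a minimal sketch of face detection using the pretrained Haar cascade classifier shipped with OpenCV (Bradski and Kaehler 2008). It is an illustrative sketch, not the chapter's own method; the camera index and the choice of cascade file are assumptions, and any frontal-face model bundled with OpenCV would do.

```python
# Minimal sketch: detect faces in a live camera stream with OpenCV.
# Assumptions: camera index 0 and the default frontal-face Haar cascade.
import cv2

# Load a pretrained frontal-face detector bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # default camera; index is an assumption

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for detected faces.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Human sensing: face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

Detections of this kind are the raw material for the higher-level analyses discussed in the chapter: once faces (or bodies, gestures, and utterances) are localized per frame, they can be tracked over time and correlated across sensors to characterize activities and interactions.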

Keywords

Face Detection · Remote Communication · Multiple Camera · Omnidirectional Image · Radio Frequency Identification Device

References

  1. Knapp, M., Hall, J.: Nonverbal Communication in Human Interaction. Wadsworth, 1972
  2. Wickens, C. et al.: Introduction to Human Factors Engineering. Pearson, 2004
  3. Doucet, A., De Freitas, N., Gordon, N.J.: Sequential Monte Carlo Methods in Practice. Springer, 2001
  4. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly, 2008
  5. Ekman, P.: Facial expression and emotion. American Psychologist 48, 384–392, 1993
  6. Dornhege, G. et al. (eds.): Toward Brain-Computer Interfacing. MIT Press, 2007
  7. Ericsson, K., Simon, H.: Protocol Analysis. MIT Press, Cambridge, Massachusetts, 1984
  8. Psathas, G.: Conversation Analysis. Sage Publications, 1995
  9. Kubota, S., Nakamura, Y., Ohta, Y.: Detecting Scenes of Attention from Personal View Records — Motion Estimation Improvements and Cooperative Use of a Surveillance Camera. Proc. IAPR Workshop on Machine Vision and Applications, 209–213, 2002
  10. Sumi, Y. et al.: Collaborative capturing, interpreting and sharing of experiences. Personal and Ubiquitous Computing 11(4), 213–328, 2007
  11. Kipp, M.: Anvil — A Generic Annotation Tool for Multimodal Dialogue. Proc. 7th European Conf. on Speech Communication and Technology (Eurospeech), pp. 1367–1370, 2001

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Academic Center for Computing and Media Studies, Kyoto University, Kyoto, Japan
