Human Behavior Recognition for an Intelligent Video Production System

  • Motoyuki Ozeki
  • Yuichi Nakamura
  • Yuichi Ohta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2532)

Abstract

We propose a novel framework for the automated capture and production of video of desktop manipulations. We focus on the system's ability to select relevant views by recognizing types of human behavior. With this capability, the resulting videos direct the audience's attention to the relevant portions of the footage and enable more effective communication. We first discuss the significant types of human behavior commonly expressed in presentations, and propose a simple, highly precise method for recognizing them. We then demonstrate the efficacy of our system experimentally by recording presentations of desktop manipulations.
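The abstract does not detail the recognition method itself; purely as a minimal illustrative sketch of the view-selection idea, the snippet below assumes hypothetical per-frame features (hand speed and hand-to-object distance, as a hand/object tracker might supply) and maps a coarse behavior type to a camera view. All names, thresholds, and view labels here are invented for illustration and are not the authors' method.

```python
# Illustrative sketch only -- not the paper's algorithm.
# Assumes hypothetical per-frame tracking features.
from dataclasses import dataclass

@dataclass
class Frame:
    hand_speed: float        # hand speed (cm/s); hypothetical feature
    hand_object_dist: float  # hand-to-nearest-object distance (cm); hypothetical

def classify_behavior(f: Frame) -> str:
    """Coarse rule-based behavior typing. Thresholds are invented."""
    if f.hand_object_dist < 5.0 and f.hand_speed < 10.0:
        return "manipulate"   # hand is on an object and nearly still
    if f.hand_object_dist < 20.0:
        return "point"        # hand is near an object without touching it
    return "idle"             # no object-directed behavior detected

# Each behavior type maps to the view most informative for the audience.
VIEW_FOR = {
    "manipulate": "close-up on hands",
    "point": "close-up on the indicated object",
    "idle": "wide shot of the presenter",
}

def select_view(f: Frame) -> str:
    return VIEW_FOR[classify_behavior(f)]

if __name__ == "__main__":
    # A slow hand touching an object -> switch to the hands close-up.
    print(select_view(Frame(hand_speed=4.0, hand_object_dist=2.0)))
```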

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Motoyuki Ozeki ¹
  • Yuichi Nakamura ¹
  • Yuichi Ohta ¹
  1. IEMS, University of Tsukuba, Japan
