Construction of Symbolic Representation from Human Motion Information

  • Yutaka Araki
  • Daisaku Arita
  • Rin-ichiro Taniguchi
  • Seiichi Uchida
  • Ryo Kurazume
  • Tsutomu Hasegawa
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4252)

Abstract

In general, avatar-based communication has the merit of being able to convey non-verbal information. The simplest way to convey such information is to capture human action/motion with a motion capture system and to visualize the received motion data through the avatar. However, transferring raw motion data often makes the avatar's motion unnatural or unrealistic, because the body structure of the avatar usually differs somewhat from that of a human. We think this problem can be solved by transferring the meaning of a motion instead of the raw motion data, and by visualizing that meaning appropriately for the avatar's function and body structure. Here, the key issue is how to symbolize motion meanings; in particular, the problem is which kinds of motion should be symbolized. In this paper, we introduce an algorithm that decides which symbols should be recognized by referring to accumulated communication data, i.e., motion data.
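The abstract does not spell out how continuous motion data is turned into symbols. As a rough illustration of the general idea of mapping a motion signal to a discrete symbol sequence, the sketch below applies a SAX-style discretization to a single joint-angle trajectory. The function name, parameters, and the synthetic trajectory are all hypothetical; this is not the authors' symbol-selection algorithm, only a minimal sketch of one common way to symbolize a time series.

    import numpy as np

    def symbolize_motion(signal, n_segments=8, alphabet="abcd"):
        # Hypothetical sketch (not the authors' algorithm): convert a 1-D
        # motion signal, e.g. a joint-angle trajectory, into a short symbol
        # string via a SAX-style discretization.
        x = np.asarray(signal, dtype=float)
        # z-normalize so breakpoints of the standard normal distribution apply
        x = (x - x.mean()) / (x.std() + 1e-12)
        # piecewise aggregate approximation: mean value of each segment
        paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
        # breakpoints cutting the standard normal into 4 equiprobable regions
        breakpoints = np.array([-0.67, 0.0, 0.67])
        return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

    # Usage: a synthetic "raise the arm, then lower it" angle trajectory
    angle = np.concatenate([np.linspace(0, 90, 50), np.linspace(90, 0, 50)])
    print(symbolize_motion(angle))  # prints something like "abcddcba"

Once motions are reduced to symbol strings of this kind, recurring patterns can be compared and counted with ordinary string operations, which is the setting in which selecting symbols from accumulated motion data becomes meaningful.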

Keywords

Motion Data, Independent Component Analysis, Symbolic Representation, Motion Information, Label Pattern


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yutaka Araki (1)
  • Daisaku Arita (1)
  • Rin-ichiro Taniguchi (1)
  • Seiichi Uchida (1)
  • Ryo Kurazume (1)
  • Tsutomu Hasegawa (1)

  1. Department of Intelligent Systems, Kyushu University, Fukuoka, Japan
