A Study of Multi-modal and NUI/NUX Framework

  • Gwanghyung Lee
  • Dongkyoo Shin
  • Dongil Shin
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 309)

Abstract

Up to now, typical motion recognition methods have used markers. These methods receive the coordinate values of each marker as relative input data and store each coordinate value in a database. Marker-based methods can store and use accurate values, but as the era of ubiquitous computing arrives, there is not enough time to carry out the preparation process that recognition requires. To address this problem, we implement a real-time, markerless motion recognition framework using a Kinect camera. In particular, we implement a hand-mouse framework and a finger recognition framework. We also implemented the NUI/NUX framework so that anyone can use it easily and intuitively.
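The hand-mouse idea described above can be illustrated with a small sketch. The paper does not give its mapping algorithm, so this is a hypothetical example: a tracked hand position in normalized camera space (as a Kinect-style skeletal tracker might report it) is mapped to screen pixel coordinates, using a reduced "active zone" so the user does not have to reach the edges of the camera frame. The function name, active-zone bounds, and screen size are all illustrative assumptions, not the authors' implementation.

```python
def hand_to_cursor(hand_x, hand_y, screen_w=1920, screen_h=1080,
                   active=(0.25, 0.25, 0.75, 0.75)):
    """Map a hand position in normalized camera space [0,1]x[0,1] to
    screen pixel coordinates. Only positions inside the 'active' zone
    move the cursor; positions outside it are clamped to the zone edge."""
    x0, y0, x1, y1 = active
    # Clamp the hand position into the active zone.
    nx = min(max(hand_x, x0), x1)
    ny = min(max(hand_y, y0), y1)
    # Rescale the active zone back to the full [0,1] range.
    nx = (nx - x0) / (x1 - x0)
    ny = (ny - y0) / (y1 - y0)
    # Convert to pixel coordinates.
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

Feeding this function one tracked hand joint per depth frame (Kinect v1 delivers skeletal data at 30 fps) yields a cursor position per frame; a real system would also smooth the output to suppress tracking jitter.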

Keywords

Facial Expression Recognition · Voice Recognition · Motion Recognition · Kinect Camera · Gesture Space
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Department of Computer Engineering, Sejong University, Gwangjin-Gu, Seoul, South Korea
