Long-Range Hand Gesture Interaction Based on Spatio-temporal Encoding

  • Jaewon Kim
  • Gyuchull Han
  • Ig-Jae Kim
  • Hyounggon Kim
  • Sang Chul Ahn
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8028)

Abstract

We present a novel hand gesture interaction method with a long working range (1 m to 5 m) that overcomes the cost-performance trade-off of conventional approaches. Our camera-free interaction system consists of a pair of lighting devices and an instrumented glove with photosensor markers. The lighting devices spatio-temporally encode the user's interaction space with binary infrared light signals, and the 3D positions of the fingertip markers are tracked at high speed (250 Hz) with good accuracy (5 mm at a 3 m working distance). Each marker consists of a photosensor array, which provides a wide sensing range and minimizes self-occlusion among the fingers. Experimental results demonstrate various applications in which hand gestures are recognized as input commands for interacting with digital information, mimicking natural hand gestures toward real objects. Compared with conventional long-range interaction techniques, our system offers advantages in accuracy, speed, cost, and robustness. Ambiguity-free marker recognition and low cost-performance dependency are additional strengths of our method.
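The abstract does not spell out the encoding scheme, so the following is only a minimal sketch of how a photosensor marker might decode a binary, spatio-temporally multiplexed infrared pattern into a position estimate, assuming a Gray-code style pattern as used in related photosensing-marker systems; all function names, parameters, and values are illustrative, not the authors' implementation.

    # Minimal, hypothetical sketch: decoding a binary spatio-temporal light
    # pattern at a single photosensor marker. The paper's exact code design is
    # not given here; a Gray-code pattern is assumed for illustration.
    from typing import List, Sequence

    def threshold_samples(samples: Sequence[float], ambient: float) -> List[int]:
        # One brightness reading per projected frame; compare against an
        # ambient-light reference to obtain one bit per frame.
        return [1 if s > ambient else 0 for s in samples]

    def gray_to_index(bits: Sequence[int]) -> int:
        # Decode a Gray-coded bit sequence (MSB first) into a cell index.
        value = bits[0]
        result = value
        for b in bits[1:]:
            value ^= b                      # cumulative XOR recovers binary bits
            result = (result << 1) | value
        return result

    def decode_angle(bits: Sequence[int], fov_deg: float) -> float:
        # Map the decoded cell index to an angle inside the device's field of view.
        cells = 2 ** len(bits)
        return (gray_to_index(bits) + 0.5) / cells * fov_deg

    # Example: 10 binary frames give 1024 angular cells along one axis.
    raw = [0.9, 0.1, 0.8, 0.7, 0.1, 0.9, 0.2, 0.1, 0.8, 0.9]
    bits = threshold_samples(raw, ambient=0.5)
    print(decode_angle(bits, fov_deg=40.0))

In a full system of the kind described, such per-axis decoding would be repeated for each lighting device and the resulting rays combined to triangulate each fingertip marker at the reported 250 Hz update rate.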

Keywords

Graphic Object, Lighting Device, Tracking Speed, Interaction Speed, Demo Video

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Jaewon Kim 1
  • Gyuchull Han 1
  • Ig-Jae Kim 1
  • Hyounggon Kim 1
  • Sang Chul Ahn 1
  1. Imaging Media Research Center, Korea Institute of Science and Technology (KIST), Korea
