Mid-Air Gestures for Virtual Modeling with Leap Motion

  • Jian Cui
  • Dieter W. Fellner
  • Arjan Kuijper
  • Alexei Sourin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9749)

Abstract

We study to what extent Leap Motion can be used for mid-air interaction in various virtual assembly and shape modeling tasks. First, we outline the conceptual design phase, carried out by studying and classifying how human hands are used for creative tasks in real life. Then, in the functional design phase, we propose a hypothesis on how to efficiently implement and use natural gestures with Leap Motion and introduce the ideas behind the algorithms. Next, we describe the implementation of the gestures in a virtual environment. Finally, a user study validates our concept.
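The functional design phase hinges on mapping tracked hand data to discrete gestures. As a minimal illustration of this idea (not the paper's actual algorithm; the function names and the 25 mm threshold are assumptions), a pinch gesture can be classified from the distance between two reported fingertip positions:

```python
# Illustrative sketch: a pinch-gesture detector of the kind a
# Leap-Motion-style tracker makes possible. The tracker reports 3D
# fingertip positions (in millimetres); we classify a "pinch" when the
# thumb and index fingertips come within a distance threshold.
import math

def distance(a, b):
    """Euclidean distance between two 3D points (x, y, z)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_pinch(thumb_tip, index_tip, threshold_mm=25.0):
    """True when thumb and index fingertips are close enough to count as a pinch."""
    return distance(thumb_tip, index_tip) < threshold_mm

# Example frames as a tracker might report them:
open_hand = ((0.0, 0.0, 0.0), (60.0, 10.0, 0.0))   # fingertips ~60.8 mm apart
pinching  = ((0.0, 0.0, 0.0), (12.0, 5.0, 3.0))    # fingertips ~13.3 mm apart

print(is_pinch(*open_hand))  # False
print(is_pinch(*pinching))   # True
```

In practice such a threshold test would be smoothed over several frames to suppress tracking jitter; the paper's own gesture set goes well beyond this single example.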

Keywords

Virtual Environment · Gesture Recognition · Hand Gesture · Virtual Object · Virtual Hand


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Jian Cui (1, 2)
  • Dieter W. Fellner (1, 3)
  • Arjan Kuijper (1, 3)
  • Alexei Sourin (2)
  1. Technische Universität Darmstadt, Darmstadt, Germany
  2. School of Computer Engineering, Nanyang Technological University, Singapore, Singapore
  3. Fraunhofer IGD, Darmstadt, Germany
