Virtual Interface and Its Application in Natural Interaction

Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9292)

Abstract

In gesture interaction based on computer vision, the effective area of hand activity is often the entire region captured by the cameras, so subconscious movements of users may be interpreted as valid computer commands. To address this problem, we propose the concept of a virtual interface, which narrows the effective area of hand activity to a specific region. Because a gesture inside the virtual interface is regarded as valid while a gesture outside it is regarded as invalid, the "Midas Touch Problem" can be solved. First, we determine the position and size of the virtual interface with the least squares method through a learning process. Users then interact with the computer through the virtual interface, and gesture commands are issued only within it. The experimental results show that the proposed virtual interface efficiently solves the "Midas Touch Problem" and provides a good user experience.
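The idea in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: function names, the calibration procedure (sampling hand positions near each intended corner during learning), and the use of per-corner sample means as the least-squares point estimates are all assumptions made for the example.

```python
import numpy as np

def fit_virtual_interface(corner_samples):
    """Estimate the virtual interface rectangle from calibration samples.

    corner_samples: list of four (n_i, 2) arrays of noisy (x, y) hand
    positions, one array per intended corner. The mean of each array is
    the least-squares estimate of that corner's true position.
    """
    corners = np.array([s.mean(axis=0) for s in corner_samples])
    x_min, y_min = corners.min(axis=0)
    x_max, y_max = corners.max(axis=0)
    return (x_min, y_min, x_max, y_max)

def is_effective(gesture_xy, region):
    """A gesture is treated as a command only inside the virtual interface;
    anything outside is ignored, avoiding the Midas Touch Problem."""
    x_min, y_min, x_max, y_max = region
    x, y = gesture_xy
    return x_min <= x <= x_max and y_min <= y <= y_max
```

With a region fitted this way, the recognizer simply discards any detected gesture whose hand position fails `is_effective`, so subconscious movements outside the region never become commands.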

Keywords

Virtual interface · Midas touch problem · Gesture interaction

Notes

Acknowledgment

This paper is supported by the National Natural Science Foundation of China (Nos. 61173079 and 61472163) and the Science and Technology Project of Shandong Province (No. 2015GGX101025).


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  • Hui Liu (1, 2)
  • Zhiquan Feng (1, 2)
  • Liwei Liang (1, 2)
  • Zhipeng Xu (1, 2)
  1. Department of Information Science and Engineering, University of Jinan, Jinan, China
  2. Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Jinan, China