Kinect Web Kiosk Framework

  • Ciril Bohak
  • Matija Marolt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7946)

Abstract

In this paper we present a web kiosk framework based on the Kinect sensor. The main idea is to use the framework for creating simple interactive presentations for informing, advertising, and presenting knowledge to the public. Such a framework simplifies the adaptation of existing web materials for presentation on the kiosk. Touchless interaction can also be used for browsing the interactive content, engaging users and encouraging them to spend more time with the presented material. We present the structure of the framework and a simple case study on using it as an interactive presentation platform and as an educational resource. The developed framework has been used for presenting information on educational programs at the Faculty of Computer and Information Science, University of Ljubljana.
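
A minimal browser-side sketch of how such touchless browsing could be wired to a kiosk page, assuming a separate native service tracks the user's hand with the Kinect and publishes gesture events as JSON over a local WebSocket; the endpoint ws://localhost:8181/gestures, the message shape, and the kiosk-cursor element are illustrative assumptions, not details from the paper:

```typescript
// Hypothetical glue between a Kinect gesture bridge and the kiosk web page.
// A native service (not shown) is assumed to emit JSON gesture events.

type GestureEvent =
  | { type: "hand-move"; x: number; y: number } // normalized 0..1 screen coordinates
  | { type: "swipe-left" }
  | { type: "swipe-right" };

const socket = new WebSocket("ws://localhost:8181/gestures"); // assumed local bridge

socket.onmessage = (msg: MessageEvent) => {
  const ev = JSON.parse(msg.data as string) as GestureEvent;
  switch (ev.type) {
    case "hand-move": {
      // Move an on-screen cursor element so the user gets visual feedback of the tracked hand.
      const cursor = document.getElementById("kiosk-cursor");
      if (cursor) {
        cursor.style.left = `${ev.x * window.innerWidth}px`;
        cursor.style.top = `${ev.y * window.innerHeight}px`;
      }
      break;
    }
    case "swipe-left":
      history.forward(); // advance to the next kiosk page
      break;
    case "swipe-right":
      history.back(); // return to the previous kiosk page
      break;
  }
};
```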

Keywords

HCI, Kinect, interactive kiosk, presentation, interactivity, interaction framework

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ciril Bohak (1)
  • Matija Marolt (1)
  1. Faculty of Computer and Information Science, University of Ljubljana, Slovenia
