Personal and Ubiquitous Computing, Volume 21, Issue 2, pp 203–217

Experiencing touchless interaction with augmented content on wearable head-mounted displays in cultural heritage applications

  • Nadia Brancati
  • Giuseppe Caggianese
  • Maria Frucci
  • Luigi Gallo
  • Pietro Neroni
Original Article

Abstract

Cultural heritage could benefit significantly from the integration of wearable augmented reality (AR). This technology has the potential to guide visitors and provide them with in-depth information without distracting them from the context, while offering a natural interaction through which they can explore and navigate a huge amount of cultural information. Integrating touchless interaction with augmented reality is particularly challenging. On the technical side, the human–machine interface has to be reliable enough to guide users across the real world, with its cluttered backgrounds and severe changes in illumination conditions. On the user experience side, the interface has to provide precise interaction tools while minimizing the perceived task difficulty. In this study, an interactive wearable AR system that augments the environment with cultural information is described. To make the interface robust, a strategy is introduced that exploits both depth and color data, selecting the most reliable information in each single frame. Moreover, the results of an ISO 9241-9 user study performed in both indoor and outdoor conditions are presented and discussed. The experimental results show that, by using both depth and color data, the interface behaves consistently across different indoor and outdoor scenarios. Furthermore, they show that the presence of a virtual pointer in the augmented visualization significantly reduces the users' error rate in selection tasks.
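The per-frame depth/color fusion strategy and the ISO 9241-9 evaluation lend themselves to a brief illustration. The sketch below is not the authors' implementation: the YCbCr skin bounds, the interaction-volume limits, the coverage heuristic, and all function names are assumptions chosen for clarity. Only the throughput computation follows the formulation commonly used with ISO 9241-9 (We = 4.133 · SDx, IDe = log2(D/We + 1), TP = IDe/MT).

```python
"""Illustrative sketch (not the paper's code): per-frame fusion of
depth and color cues for hand segmentation, plus ISO 9241-9 throughput.
All thresholds and parameter values are assumptions for illustration."""
import numpy as np


def skin_mask_ycbcr(frame_ycbcr, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from fixed YCbCr bounds (assumed thresholds).

    frame_ycbcr: HxWx3 uint8 image in YCbCr color space.
    """
    cb, cr = frame_ycbcr[..., 1], frame_ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))


def depth_mask(depth_mm, near=200, far=600):
    """Keep pixels inside an assumed egocentric interaction volume
    (20-60 cm in front of the head-mounted camera)."""
    valid = depth_mm > 0  # zero depth = sensor dropout
    return valid & (depth_mm >= near) & (depth_mm <= far)


def fuse_hand_mask(frame_ycbcr, depth_mm, min_depth_coverage=0.3):
    """Pick the more reliable cue on each frame.

    If the depth map covers enough of the color-based candidate region,
    intersect the two masks (depth prunes skin-like background clutter);
    otherwise fall back to color alone, e.g. outdoors where sunlight
    degrades the depth sensor.
    """
    color = skin_mask_ycbcr(frame_ycbcr)
    depth = depth_mask(depth_mm)
    if color.any():
        coverage = (color & (depth_mm > 0)).sum() / color.sum()
    else:
        coverage = 0.0
    return (color & depth) if coverage >= min_depth_coverage else color


def iso9241_throughput(distance_px, endpoint_sd_px, movement_time_s):
    """ISO 9241-9 throughput TP = IDe / MT, where the effective width
    is We = 4.133 * SDx and IDe = log2(D / We + 1)."""
    we = 4.133 * endpoint_sd_px
    ide = np.log2(distance_px / we + 1.0)
    return ide / movement_time_s
```

The fallback rule is the key design point: rather than always averaging the two cues, each frame uses the intersection only when depth data are trustworthy, which is what lets a single interface behave consistently both indoors and outdoors.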

Keywords

Touchless interaction · Wearable augmented reality · Point-and-click interface · RGB-D camera · User study

References

  1. Amato F, Chianese A, Mazzeo A, Moscato V, Picariello A, Piccialli F (2013) The talking museum project. Proc Comput Sci 21:114–121
  2. Bai H, Gao L, El-Sana J, Billinghurst M (2013) Markerless 3D gesture-based interaction for handheld augmented reality interfaces. In: Proceedings of the international symposium on mixed and augmented reality (ISMAR). IEEE, pp 1–6
  3. Baraldi L, Paci F, Serra G, Benini L, Cucchiara R (2014) Gesture recognition in ego-centric videos using dense trajectories and hand segmentation. In: 2014 IEEE conference on computer vision and pattern recognition workshops (CVPRW). IEEE, pp 702–707
  4. Baraldi L, Paci F, Serra G, Benini L, Cucchiara R (2015) Gesture recognition using wearable vision sensors to enhance visitors’ museum experiences. IEEE Sens J 15(5):2705–2714
  5. Betancourt A (2014) A sequential classifier for hand detection in the framework of egocentric vision. In: 2014 IEEE conference on computer vision and pattern recognition workshops (CVPRW). IEEE, pp 600–605
  6. Betancourt A, Morerio P, Marcenaro L, Barakova E, Rauterberg M, Regazzoni C (2015) Towards a unified framework for hand-based methods in first person vision. In: 2015 IEEE international conference on multimedia and expo workshops (ICMEW). IEEE, pp 1–6
  7. Betancourt A, Morerio P, Regazzoni CS, Rauterberg M (2015) The evolution of first person vision methods: a survey. IEEE Trans Circuits Syst Video Technol 25(5):744–760
  8. Brancati N, Caggianese G, Frucci M, Gallo L, Neroni P (2015) Robust fingertip detection in egocentric vision under varying illumination conditions. In: 2015 IEEE international conference on multimedia and expo workshops (ICMEW). IEEE, pp 1–6
  9. Brancati N, De Pietro G, Frucci M, Gallo L (2016) Dynamic clustering for skin detection in YCbCr colour space. In: 2016 international conference on pattern recognition and information processing (PRIP) (in press)
  10. Caggianese G, Neroni P, Gallo L (2014) Natural interaction and wearable augmented reality for the enjoyment of the cultural heritage in outdoor conditions. In: Augmented and virtual reality. Springer, pp 267–282
  11. Chianese A, Marulli F, Piccialli F, Benedusi P, Jung JE (2016) An associative engines based approach supporting collaborative analytics in the internet of cultural things. Future Gener Comput Syst
  12. Chianese A, Piccialli F (2015) Improving user experience of cultural environment through IoT: the beauty or the truth case study. In: Intelligent interactive multimedia systems and services. Springer, pp 11–20
  13. Ciolini A, Seidenari L, Karaman S, Del Bimbo A (2015) Efficient Hough forest object detection for low-power devices. In: 2015 IEEE international conference on multimedia and expo workshops (ICMEW). IEEE, pp 1–6
  14. Gallo L, Minutolo A (2012) Design and comparative evaluation of smoothed pointing: a velocity-oriented remote pointing enhancement technique. Int J Hum Comput Stud 70(4):287–300
  15. Grossman T, Balakrishnan R (2005) A probabilistic approach to modeling two-dimensional pointing. ACM Trans Comput Hum Interact 12(3):435–459
  16. Harrison C, Benko H, Wilson AD (2011) OmniTouch: wearable multitouch interaction everywhere. In: Proceedings of the 24th annual ACM symposium on user interface software and technology. ACM, pp 441–450
  17. ISO/DIS 9241-9 (2000) Ergonomic requirements for office work with visual display terminals (VDTs)—part 9: requirements for non-keyboard input devices. International standard, International Organization for Standardization
  18. Jang Y, Noh ST, Chang HJ, Kim TK, Woo W (2015) 3D finger CAPE: clicking action and position estimation under self-occlusions in egocentric viewpoint. IEEE Trans Vis Comput Graph 21(4):501–510
  19. Keskin C, Kıraç F, Kara YE, Akarun L (2013) Real time hand pose estimation using depth sensors. In: Consumer depth cameras for computer vision. Springer, pp 119–137
  20. Khan R, Hanbury A, Stoettinger J (2010) Skin detection: a random forest approach. In: 2010 17th IEEE international conference on image processing (ICIP). IEEE, pp 4613–4616
  21. Klompmaker F, Nebe K, Fast A (2012) dSensingNI: a framework for advanced tangible interaction using a depth camera. In: Proceedings of the sixth international conference on tangible, embedded and embodied interaction. ACM, pp 217–224
  22. Lee T, Höllerer T (2007) Handy AR: markerless inspection of augmented reality objects using fingertip tracking. In: 2007 11th IEEE international symposium on wearable computers. IEEE, pp 83–90
  23. Lee T, Höllerer T (2009) Multithreaded hybrid feature tracking for markerless augmented reality. IEEE Trans Vis Comput Graph 15(3):355–368
  24. Li C, Kitani KM (2013) Pixel-level hand detection in ego-centric videos. In: 2013 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 3570–3577
  25. Lu Z, Grauman K (2013) Story-driven summarization for egocentric video. In: 2013 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 2714–2721
  26. Mayol W, Murray D (2005) Wearable hand activity recognition for event summarization. In: Ninth IEEE international symposium on wearable computers, pp 122–129
  27. Moghimi M, Azagra P, Montesano L, Murillo AC, Belongie S (2014) Experiments on an RGB-D wearable vision system for egocentric activity recognition. In: 2014 IEEE conference on computer vision and pattern recognition workshops (CVPRW). IEEE, pp 611–617
  28. Oikonomidis I, Kyriazis N, Argyros AA (2011) Efficient model-based 3D tracking of hand articulations using Kinect. In: BMVC, vol 1, p 3
  29. Palacios JM, Sagüés C, Montijano E, Llorente S (2013) Human–computer interaction based on hand gestures using RGB-D sensors. Sensors 13(9):11842–11860
  30. Piumsomboon T, Altimira D, Kim H, Clark A, Lee G, Billinghurst M (2014) Grasp-Shell vs gesture-speech: a comparison of direct and indirect natural interaction techniques in augmented reality. In: 2014 IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 73–82
  31. Piumsomboon T, Clark A, Billinghurst M, Cockburn A (2013) User-defined gestures for augmented reality. In: Human–computer interaction—INTERACT 2013. Springer, pp 282–299
  32. Powers DM (2007) Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. Technical report SIE-07-001, School of Informatics and Engineering, Flinders University of South Australia, Adelaide
  33. Ren X, Gu C (2010) Figure-ground segmentation improves handled object recognition in egocentric video. In: 2010 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 3137–3144
  34. Ren Z, Meng J, Yuan J (2011) Depth camera based hand gesture recognition and its applications in human–computer interaction. In: Proceedings of the international conference on information, communications and signal processing (ICICS). IEEE, pp 1–5
  35. Ren Z, Yuan J, Zhang Z (2011) Robust hand gesture recognition based on finger-earth mover’s distance with a commodity depth camera. In: Proceedings of the 19th ACM international conference on multimedia. ACM, pp 1093–1096
  36. Rogez G, Supancic JS, Ramanan D (2015) First-person pose recognition using egocentric workspaces. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4325–4333
  37. Rogez G, Supancic III JS, Khademi M, Montiel JMM, Ramanan D (2014) 3D hand pose detection in egocentric RGB-D images. arXiv preprint arXiv:1412.0065
  38. Serra G, Camurri M, Baraldi L, Benedetti M, Cucchiara R (2013) Hand segmentation for gesture recognition in ego-vision. In: Proceedings of the 3rd ACM international workshop on interactive multimedia on mobile and portable devices. ACM, pp 31–36
  39. Starner T (2013) Project Glass: an extension of the self. IEEE Pervasive Comput 12(2):14–16
  40. Starner T, Mann S, Rhodes B, Levine J, Healey J, Kirsch D, Picard RW, Pentland A (1997) Augmented reality through wearable computing. Presence Teleoper Virtual Environ 6(4):386–398
  41. Supancic JS, Rogez G, Yang Y, Shotton J, Ramanan D (2015) Depth-based hand pose estimation: data, methods, and challenges. In: Proceedings of the IEEE international conference on computer vision, pp 1868–1876
  42. Wen Y, Hu C, Yu G, Wang C (2012) A robust method of detecting hand gestures using depth sensors. In: Proceedings of the international workshop on haptic audio visual environments and games (HAVE). IEEE, pp 72–77
  43. Wobbrock JO, Shinohara K, Jansen A (2011) The effects of task dimensionality, endpoint deviation, throughput calculation, and experiment design on pointing measures and models. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, pp 1639–1648

Copyright information

© Springer-Verlag London 2016

Authors and Affiliations

  • Nadia Brancati (1)
  • Giuseppe Caggianese (1)
  • Maria Frucci (1)
  • Luigi Gallo (1)
  • Pietro Neroni (1)

  1. Institute of High Performance Computing and Networking (ICAR-CNR), National Research Council of Italy, Naples, Italy
