
Exploiting sensing devices availability in AR/VR deployments to foster engagement

  • S.I.: VR in Education
  • Published in: Virtual Reality

Abstract

In current augmented reality (AR) and virtual reality (VR) educational experiences, the evolution of the experience (game, exercise or other) and the assessment of the user’s performance are based on the user’s (re)actions, which are continuously sensed and traced. In this paper, we propose exploiting the sensors available in AR/VR systems to enhance these experiences by taking into account the user’s affective state as it changes in real time. Adapting the difficulty level of the experience to the user’s affective state fosters engagement, a crucial issue in educational environments, and prevents both boredom and anxiety. The user’s cues are processed to enable dynamic user profiling. Since diverse sensing devices are available in different AR/VR systems, we investigate the detection of the affective state from different sensing inputs and present techniques that have undergone validation using state-of-the-art sensors.
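
As a rough illustration of the adaptation idea sketched in the abstract (not the paper's actual method), the following Python snippet shows one way a difficulty-adaptation rule driven by a detected affective state could look; all names (AffectState, adapt_difficulty, the step size and difficulty range) are hypothetical and chosen only for this sketch.

```python
from enum import Enum


class AffectState(Enum):
    """Illustrative affective states; the paper targets boredom, engagement and anxiety."""
    BORED = "bored"
    ENGAGED = "engaged"
    ANXIOUS = "anxious"


def adapt_difficulty(current_level: float, state: AffectState, step: float = 0.1) -> float:
    """Hypothetical adaptation rule: raise difficulty when the learner is bored,
    lower it when anxious, and keep it unchanged when engaged (in flow)."""
    if state is AffectState.BORED:
        current_level += step      # task too easy -> increase challenge
    elif state is AffectState.ANXIOUS:
        current_level -= step      # task too hard -> decrease challenge
    # clamp to a normalized [0, 1] difficulty range
    return max(0.0, min(1.0, current_level))


# Example: a learner drifting toward boredom gets a harder task next round.
level = 0.5
level = adapt_difficulty(level, AffectState.BORED)
print(round(level, 2))  # 0.6
```

In practice the affective state would come from the multimodal sensing pipeline (facial analysis, speech, gaze, skeletal motion) discussed in the paper, rather than being supplied by hand as above.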

Acknowledgements

The work presented in this document has been partially funded through the H2020 MaTHiSiS project, which has received funding from the European Union’s Horizon 2020 Programme (H2020-ICT-2015) under Grant Agreement No. 687772.

Author information

Corresponding author

Correspondence to Panagiotis Trakadas.

Cite this article

Vretos, N., Daras, P., Asteriadis, S. et al. Exploiting sensing devices availability in AR/VR deployments to foster engagement. Virtual Reality 23, 399–410 (2019). https://doi.org/10.1007/s10055-018-0357-0
