
GazeRoomLock: Using Gaze and Head-Pose to Improve the Usability and Observation Resistance of 3D Passwords in Virtual Reality

Conference paper in: Augmented Reality, Virtual Reality, and Computer Graphics (AVR 2020)

Part of the book series: Lecture Notes in Computer Science (volume 12242)

Abstract

Authentication has become an important component of Immersive Virtual Reality (IVR) applications such as virtual shopping stores, social networks, and games. Recent work showed that, compared to traditional graphical and alphanumeric passwords, 3D passwords are a more promising form of passwords for IVR. This work evaluates four multimodal techniques for entering 3D passwords in IVR, where a password consists of multiple virtual objects selected in succession. Specifically, we compare eye gaze and head pose for pointing, and dwell time and tactile input for selection. A comparison of (a) usability, in terms of entry time, error rate, and memorability, and (b) resistance to real-world and offline observations reveals that pointing at targets using gaze and selecting them with a handheld controller significantly improves usability and security compared to the other methods and to prior work. We discuss how the choice of pointing and selection methods impacts the usability and security of 3D passwords in IVR.
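As a rough illustration of the selection mechanisms compared in the abstract, the sketch below confirms a pointed-at virtual object either after a continuous dwell or on a controller trigger press. This is hypothetical code, not the authors' implementation; the dwell threshold and all names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

DWELL_THRESHOLD = 0.8  # seconds of continuous pointing; assumed value


@dataclass
class SelectionState:
    target: Optional[str] = None  # object currently pointed at (gaze or head ray)
    dwell: float = 0.0            # accumulated pointing time on that object


def update(state: SelectionState, pointed: Optional[str], dt: float,
           trigger_pressed: bool, use_dwell: bool) -> Optional[str]:
    """Advance one frame; return the id of a newly selected object, or None."""
    if pointed != state.target:        # pointer moved to a different object
        state.target = pointed
        state.dwell = 0.0
    if pointed is None:
        return None
    if use_dwell:                      # dwell-time selection
        state.dwell += dt
        if state.dwell >= DWELL_THRESHOLD:
            state.dwell = 0.0
            return pointed
        return None
    # tactile selection: confirm with the handheld controller's trigger
    return pointed if trigger_pressed else None
```

A 3D password is then the sequence of selected object ids, matched against the stored sequence; only the pointing modality (gaze vs. head pose) and the confirmation path above vary between the four techniques.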


Notes

  1. Some HMDs show the user's view on a nearby screen; this feature must be automatically disabled during authentication. We expect that HMDs will increasingly become untethered.
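The note's guideline can be enforced with a small guard around the authentication scene. The sketch below assumes a hypothetical `mirroring_enabled` switch on the HMD object; real VR runtimes expose mirroring differently, if at all.

```python
class AuthGuard:
    """Disable HMD view mirroring while a password is entered, then restore it.

    `hmd.mirroring_enabled` is a hypothetical attribute standing in for
    whatever mirror-view switch a given runtime provides.
    """

    def __init__(self, hmd):
        self.hmd = hmd
        self._prev = None

    def __enter__(self):
        self._prev = self.hmd.mirroring_enabled
        self.hmd.mirroring_enabled = False  # hide the view during authentication
        return self.hmd

    def __exit__(self, exc_type, exc, tb):
        self.hmd.mirroring_enabled = self._prev  # restore even if an error occurs
        return False
```

Wrapping password entry in `with AuthGuard(hmd): ...` guarantees the mirrored screen is blanked for exactly the duration of authentication.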


Acknowledgements

The contributions from the authors Mohamed Khamis and Daniel Buschek were supported, in part, by the Royal Society of Edinburgh (Award number 65040), and the Bavarian State Ministry of Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B).

Author information


Corresponding author

Correspondence to Ceenu George.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

George, C., Buschek, D., Ngao, A., Khamis, M. (2020). GazeRoomLock: Using Gaze and Head-Pose to Improve the Usability and Observation Resistance of 3D Passwords in Virtual Reality. In: De Paolis, L., Bourdot, P. (eds.) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2020. Lecture Notes in Computer Science, vol. 12242. Springer, Cham. https://doi.org/10.1007/978-3-030-58465-8_5


  • DOI: https://doi.org/10.1007/978-3-030-58465-8_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58464-1

  • Online ISBN: 978-3-030-58465-8

  • eBook Packages: Computer Science, Computer Science (R0)
