User Evaluation of Hand Gestures for Designing an Intelligent In-Vehicle Interface

  • Hessam Jahani
  • Hasan J. Alyamani
  • Manolya Kavakli
  • Arindam Dey
  • Mark Billinghurst
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10243)

Abstract

Driving a car is a high cognitive-load task that requires full attention behind the wheel. Intelligent navigation, transportation, and in-vehicle interfaces have made driving safer and less demanding. However, existing interaction systems still fall short of the requirements of actual user experience. Hand gestures are a natural interaction medium and are less visually demanding while driving. This paper presents a user study with 79 participants to validate mid-air gestures for 18 major in-vehicle secondary tasks. We provide a detailed analysis of 900 mid-air gestures, investigating gesture preferences for in-vehicle tasks, their physical affordances, and driving errors. The results show that using mid-air gestures reduces driving errors by up to 50% compared to traditional air-conditioning controls. These results can inform the development of vision-based in-vehicle gestural interfaces.
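
The abstract does not state how gesture preferences were analysed, but elicitation studies of this kind are commonly summarised with per-task agreement scores in the style of Wobbrock et al.'s user-defined gesture work. The sketch below is a minimal Python illustration of that computation under those assumptions; the task names, gesture labels, and the agreement_score helper are hypothetical and not taken from the paper.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one task (referent): sum over groups of identical
    gesture proposals of (group size / total proposals) squared.
    Higher values mean participants converged on the same gesture."""
    total = len(proposals)
    if total == 0:
        return 0.0
    counts = Counter(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# Hypothetical elicitation data: task -> gesture labels proposed by participants.
elicited = {
    "increase fan speed": ["swipe_up", "swipe_up", "rotate_cw", "swipe_up"],
    "answer phone call":  ["grab", "point", "grab", "wave"],
}

for task, gestures in elicited.items():
    print(f"{task}: agreement = {agreement_score(gestures):.2f}")
```

Tasks whose proposals cluster on one gesture (e.g. repeated "swipe_up") yield scores closer to 1, flagging candidates for a consensus gesture set; tasks with scattered proposals score closer to 1/n and may need a designer-chosen gesture.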

Keywords

Human-computer interaction · Gesture recognition · In-vehicle interface · Human-centred design · User evaluation

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Hessam Jahani (1)
  • Hasan J. Alyamani (1)
  • Manolya Kavakli (1)
  • Arindam Dey (2)
  • Mark Billinghurst (2)

  1. VISOR Research Group, VR Lab, Department of Computing, Macquarie University, Sydney, Australia
  2. Empathic Computing Lab, University of South Australia, Adelaide, Australia