Using Visual Cues to Leverage the Use of Speech Input in the Vehicle

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10809)

Abstract

Touch and speech input often exist side by side in multimodal systems. Speech input has several advantages over touch that are especially relevant in safety-critical environments such as driving. However, information presented on large screens tempts drivers to interact via touch; drivers lack an effective trigger that reminds them that speech input might be the better choice. This work investigates the efficacy of visual cues to promote the use of speech input while driving. We conducted a driving simulator experiment with 45 participants that examined the influence of visual cues, task type, driving scenario, and audio signals on the driver's choice of modality, glance behavior, and subjective ratings. The results indicate that visual cues can effectively promote speech input without increasing visual distraction or restricting the driver's freedom to choose. We propose that our results can also be applied in other domains, such as smartphones or smart home applications.

Keywords

Visual cues · Prompts · Speech input · Triggers · Persuasive


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. BMW Group Research, New Technologies, Innovations, Munich, Germany
  2. Human-Computer Interaction Group, University of Bamberg, Bamberg, Germany