Personal and Ubiquitous Computing, Volume 17, Issue 5, pp 825–834

Interactions between human–human multi-threaded dialogues and driving

  • Andrew L. Kun
  • Alexander Shyrokov
  • Peter A. Heeman
Original Article


In-car devices with speech user interfaces are proliferating. How can we build these interfaces so that human–computer interactions with multiple devices can overlap in time without interfering with the driving task? We suggest that interface design can be inspired by the way people deal with this problem in human–human dialogues, and we propose discovering the relevant human dialogue behaviors through experiments. In this paper, we discuss how to design an appropriate human–human dialogue scenario for such experiments. We also report on one human–human experiment, describing the dialogue behaviors we found and their impact on the verbal tasks and on driving, and we offer design considerations based on the results of the study.


Keywords: Speech user interfaces · Multi-threaded dialogue · Spoken task · Driving simulator



This work was funded by the NSF under grant IIS-0326496 and by the US Department of Justice under grants 2006DDBXK099, 2008DNBXK221, 2009D1BXK021, and 2010DDBXK226.



Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  • Andrew L. Kun (1)
  • Alexander Shyrokov (1)
  • Peter A. Heeman (2)

  1. Electrical and Computer Engineering Department, University of New Hampshire, Durham, USA
  2. Biomedical Engineering Department, Center for Spoken Language Understanding, Oregon Health and Science University, Beaverton, USA
