To What Extent May Assistance Systems Correct and Prevent ‘Erroneous’ Behaviour of the Driver?

  • Toshiyuki Inagaki
Conference paper


An error in situational recognition may occur while driving a car, and such an error can sometimes result in ‘erroneous’ behaviour by the driver. Whether a driver assistance system can cope with such circumstances depends on the extent to which authority is given to the system. This paper discusses the need for machine-initiated trading of authority from the driver to the assistance system to assure driver safety. A theoretical framework is also given to describe and analyze the driver’s overtrust in, and overreliance on, such a driver assistance system.


Keywords: Driver assistance systems · Human-centered automation · Authority and responsibility · Overtrust · Overreliance



Copyright information

© Springer-Verlag Italia Srl 2011

Authors and Affiliations

  1. Department of Risk Engineering, University of Tsukuba, Tsukuba, Japan