Paradigm Development for Identifying and Validating Indicators of Trust in Automation in the Operational Environment of Human Automation Integration

  • Kim Drnec
  • Jason S. Metcalfe
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9744)


Calibrated trust in automation is a key factor supporting full integration of the human user into human-automation integrated systems, and true integration is a requirement if system performance is to meet expectations. Trust in automation (TiA) has been studied using surveys, but thus far no valid, objective indicators of TiA exist. Further, these studies have been conducted in tightly controlled laboratory environments and therefore do not necessarily translate into real-world applications that might improve joint system performance. Through a literature review, we established constraints on an operational paradigm aimed at developing indicators of TiA. Our goal in this paper was to develop an operational paradigm capable of yielding valid TiA indicators using methods from human factors and cognitive neuroscience. The operational environment chosen was driving automation, because most adults are familiar with the task and its consequence structure and therefore require little training. Initial behavioral and survey data confirm that the design constraints were met. We therefore believe that our paradigm provides a valid means of performing operational experiments aimed at further understanding TiA and its psychophysiological underpinnings.


Keywords: Trust in automation · Operational paradigm · Driving automation · Human automation integrated systems



This research was supported by the Office of the Secretary of Defense Autonomy Research Pilot Initiative program MIPR DWAM31168, and in part by an appointment to the U.S. Army Research Laboratory Postdoctoral Fellowship Program administered by the Oak Ridge Associated Universities through a cooperative agreement with the U.S. Army Research Laboratory. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911-NF-12-2-0019. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. We would also like to thank Dr. Justin Brooks and Dr. Javier Garcia for their advice.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Army Research Laboratory, Aberdeen, USA
