AHFE 2017: Advances in Neuroergonomics and Cognitive Engineering, pp. 3–11
Why Human-Autonomy Teaming?
Abstract
Automation has entered nearly every aspect of our lives, but it often remains hard to understand. Why is this? Automation is often brittle, requiring constant human oversight to ensure it operates as intended. This oversight has become harder as automation has grown more complex. To address this problem, Human-Autonomy Teaming (HAT) has been proposed. HAT builds on advances in automation transparency, a method for providing insight into the reasoning behind automated recommendations and actions, along with advances in human-automation communication (e.g., voice). These, in turn, permit greater trust in the automation when appropriate, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, together with more capable automation, represent a shift in human-automation relations.
Keywords
Human-Autonomy Teaming · Automation · Human factors
Acknowledgments
We would like to acknowledge NASA’s Safe and Autonomous System Operations Project, which funded this research.