Why Human-Autonomy Teaming?

  • R. Jay Shively
  • Joel Lachter
  • Summer L. Brandt
  • Michael Matessa
  • Vernol Battiste
  • Walter W. Johnson
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 586)

Abstract

Automation has entered nearly every aspect of our lives, but it often remains hard to understand. Why is this? Automation is often brittle, requiring constant human oversight to assure it operates as intended. This oversight has become harder as automation has become more complicated. To resolve this problem, Human-Autonomy Teaming (HAT) has been proposed. HAT is based on advances in providing automation transparency, a method for giving insight into the reasoning behind automated recommendations and actions, along with advances in human-automation communication (e.g., voice). These, in turn, permit more trust in the automation when appropriate, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT, incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, along with more capable automation, represent a shift in human-automation relations.

Keywords

Human-Autonomy Teaming · Automation · Human factors

Notes

Acknowledgments

We would like to acknowledge NASA’s Safe and Autonomous System Operations Project, which funded this research.

Copyright information

© Springer International Publishing AG (outside the USA) 2018

Authors and Affiliations

  • R. Jay Shively (1)
  • Joel Lachter (1)
  • Summer L. Brandt (2)
  • Michael Matessa (3)
  • Vernol Battiste (2)
  • Walter W. Johnson (1)
  1. NASA Ames Research Center, Moffett Field, USA
  2. San José State University, NASA Ames Research Center, Moffett Field, USA
  3. Rockwell Collins, Cedar Rapids, USA