Human-Agent Teaming for Effective Multirobot Management: Effects of Agent Transparency

  • Michael J. Barnes
  • Jessie Y. C. Chen
  • Julia L. Wright
  • Kimberly Stowers
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9736)

Abstract

The U.S. Army Research Laboratory is engaged in a multi-year program focusing on the human role in supervising autonomous vehicles. We discuss this research with regard to patterns of human/intelligent agent (IA) interrelationships, and explore the dynamics of these patterns in terms of supervising multiple autonomous vehicles. The first design pattern focuses on a human operator controlling multiple autonomous vehicles via a single IA. The second design pattern involves multiple intelligent systems, including (a) a human operator, (b) an IA asset manager, (c) an IA planning manager, (d) an IA mission monitor, and (e) multiple autonomous vehicles. Both scenarios require a single operator to control multiple heterogeneous autonomous vehicles, yet the complexity of both the mission variables and the relations among the autonomous vehicles makes efficient operations by a single operator difficult at best. Key findings of two recent research programs are summarized, with an emphasis on their implications for developing future systems with similar design patterns. Our conclusions stress the importance of operator situation awareness, not only of the immediate environment, but also of the IA's intent, reasoning, and predicted outcomes.

Keywords

Intelligent agents · Transparency · Patterns of human–agent interaction · Human factors · Supervisory control

Acknowledgements

This research was supported by the U.S. Department of Defense Autonomy Research Pilot Initiative, under the Intelligent Multi-UxV Planner with Adaptive Collaborative/Control Technologies (IMPACT) project. The authors wish to thank Olivia Newton, Ryan Wohleber, Nicholas Kasdaglis, Michael Rupp, Daniel Barber, Jonathan Harris, Gloria Calhoun, and Mark Draper for their contribution to this project.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Michael J. Barnes (1)
  • Jessie Y. C. Chen (1, corresponding author)
  • Julia L. Wright (1)
  • Kimberly Stowers (2)
  1. U.S. Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, USA
  2. University of Central Florida, Orlando, USA