
Transparency Communication for Machine Learning in Human-Automation Interaction

  • David V. Pynadath
  • Michael J. Barnes
  • Ning Wang
  • Jessie Y. C. Chen
Chapter
Part of the Human–Computer Interaction Series (HCIS) book series

Abstract

Technological advances hold the promise of autonomous systems that can join with people to form human-machine teams more capable than either member alone. Yet as machine-learning (ML) methods are increasingly applied to the design of such systems, understanding their inner workings has become ever more challenging for the humans who work with them. The “black-box” nature of quantitative ML approaches impedes people’s situation awareness (SA) of these systems, often resulting in either disuse of, or over-reliance on, the autonomous systems that employ such algorithms. Research in human-automation interaction has shown that transparency communication can improve teammates’ SA, foster the trust relationship, and boost the performance of the human-automation team. In this chapter, we examine the implications of an agent transparency model for human interaction with ML-based agents that use automated explanations. We discuss the application of one particular ML method, reinforcement learning (RL), in agents based on Partially Observable Markov Decision Processes (POMDPs), and the design of explanation algorithms for RL in POMDPs.
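
As a rough illustration of the kind of transparency communication such explanation algorithms can produce, the Python sketch below pairs a one-step POMDP belief update with a template-based explanation of the resulting decision, exposing the agent's belief about the hidden state and the expected value of its choice. Every domain detail here (the states, sensor probabilities, rewards, and message wording) is an invented assumption for illustration, not the chapter's actual models or algorithms.

```python
# Hypothetical sketch: a one-step POMDP decision with a template-based
# explanation. All states, probabilities, rewards, and message wording
# are assumed values, not the chapter's implementation.

STATES = ("safe", "dangerous")   # hidden world states
ACTIONS = ("proceed", "wait")

# P(observation | state): the agent's noisy sensor model (assumed values)
OBS_MODEL = {
    "safe":      {"clear": 0.8, "alarm": 0.2},
    "dangerous": {"clear": 0.3, "alarm": 0.7},
}

# R(state, action): rewards the agent tries to maximize (assumed values)
REWARD = {
    ("safe", "proceed"): 10.0, ("safe", "wait"): -1.0,
    ("dangerous", "proceed"): -20.0, ("dangerous", "wait"): -1.0,
}

def belief_update(belief, observation):
    """Bayes rule over the hidden state: b'(s) is proportional to P(o|s) * b(s)."""
    unnorm = {s: OBS_MODEL[s][observation] * belief[s] for s in STATES}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

def choose_action(belief):
    """Greedy one-step policy: maximize expected reward under the belief."""
    expected = {a: sum(belief[s] * REWARD[s, a] for s in STATES)
                for a in ACTIONS}
    return max(expected, key=expected.get), expected

def explain(belief, action, expected):
    """Transparency message exposing decision, belief, and expected outcome."""
    likely = max(belief, key=belief.get)
    return (f"I chose to {action} because I believe the area is {likely} "
            f"(probability {belief[likely]:.0%}), giving {action} the "
            f"highest expected value ({expected[action]:.1f}).")

belief = {"safe": 0.5, "dangerous": 0.5}   # uniform prior over states
belief = belief_update(belief, "alarm")    # sensor reports an alarm
action, expected = choose_action(belief)
print(explain(belief, action, expected))
# -> "I chose to wait because I believe the area is dangerous
#     (probability 78%), giving wait the highest expected value (-1.0)."
```

Surfacing the belief and expected value in the message, rather than the decision alone, mirrors the SA-based transparency idea of communicating the agent's reasoning and uncertainty to its human teammate.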


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • David V. Pynadath (1)
  • Michael J. Barnes (2)
  • Ning Wang (1)
  • Jessie Y. C. Chen (2)

  1. Institute for Creative Technologies, University of Southern California, Los Angeles, USA
  2. Human Research and Engineering Directorate, US Army Research Laboratory, Orlando, USA
