A Proposed Approach for Determining the Influence of Multimodal Robot-of-Human Transparency Information on Human-Agent Teams

  • Shan Lakhmani
  • Julian Abich IV
  • Daniel Barber
  • Jessie Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9744)

Abstract

Autonomous agents, both software and robotic, are becoming increasingly common. They are being used to supplement human operators in accomplishing complex tasks, often acting as collaborators or teammates. Agents can be designed to keep their human operators ‘in the loop’ by reporting information concerning their internal decision-making process. This transparency can be expressed in a number of ways, including the communication of the human’s and the agent’s respective responsibilities. Agents can communicate information supporting transparency to human operators through the visual modality, the auditory modality, or a combination of the two. Based on this information, we suggest an approach to exploring the utility of the teamwork model of transparency. We propose some considerations for future research into feedback supporting teamwork transparency, including multimodal communication methods, human-like feedback, and the use of multiple forms of automation transparency.
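
The abstract’s point that the same transparency content can be delivered through visual, auditory, or combined channels can be pictured with a small sketch. The following Python fragment is purely illustrative and is not from the paper: the TransparencyMessage structure, the present function, and the printed stand-ins for an on-screen panel and text-to-speech output are all hypothetical assumptions.

    # Hypothetical sketch (not the authors' implementation): one way an agent
    # might package teamwork-transparency feedback and route the same content
    # to visual and/or auditory channels.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class TransparencyMessage:
        """Feedback an agent reports to keep the operator 'in the loop'."""
        agent_responsibility: str      # what the agent is currently handling
        human_responsibility: str      # what the agent expects the human to handle
        rationale: str                 # brief note on the agent's decision making
        modalities: List[str] = field(default_factory=lambda: ["visual"])


    def present(msg: TransparencyMessage) -> None:
        """Route the same transparency content to each requested modality."""
        text = (f"Agent: {msg.agent_responsibility}; "
                f"You: {msg.human_responsibility}; "
                f"Why: {msg.rationale}")
        if "visual" in msg.modalities:
            print(f"[on-screen] {text}")   # stand-in for a GUI status panel
        if "auditory" in msg.modalities:
            print(f"[speech]    {text}")   # stand-in for text-to-speech output


    if __name__ == "__main__":
        present(TransparencyMessage(
            agent_responsibility="scanning the north corridor",
            human_responsibility="confirming any flagged contacts",
            rationale="motion detected near waypoint 3",
            modalities=["visual", "auditory"],
        ))

In this toy framing, comparing operator performance under the "visual", "auditory", and combined conditions corresponds to the kind of multimodal manipulation the abstract proposes to study.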

Keywords

Multimodal communication · Human-robot interaction · Transparency · Human-agent teaming

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Shan Lakhmani (1)
  • Julian Abich IV (1)
  • Daniel Barber (1)
  • Jessie Chen (2)
  1. University of Central Florida (UCF), Institute for Simulation and Training (IST), Orlando, USA
  2. U.S. Army Research Laboratory, Orlando, USA
