
A Survey of Mental Modeling Techniques in Human–Robot Teaming

Service and Interactive Robotics (A. Tapus, Section Editor)

Abstract

Purpose of Review

As robots become increasingly prevalent and capable, the complexity of the roles and responsibilities assigned to them, as well as our expectations for them, will increase in kind. For these autonomous systems to operate safely and efficiently in human-populated environments, they will need to cooperate and coordinate with human teammates. Mental models provide a formal mechanism for achieving fluent and effective teamwork during human–robot interaction by enabling awareness between teammates and allowing for coordinated action.

Recent Findings

Much recent research in human–robot interaction has applied standardized and formalized mental modeling techniques to great effect, broadening the range of scenarios in which a robotic agent can act as an effective and trustworthy teammate.

Summary

This paper provides a structured overview of mental model theory and methodology as applied to human–robot teaming. Also discussed are evaluation methods and metrics for various aspects of mental modeling during human–robot interaction, as well as emerging applications and open challenges in the field.



Funding

This work was funded in part by NSF Award #1830686 and the U.S. Army Research Lab STRONG program.

Author information


Corresponding author

Correspondence to Aaquib Tabrez.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection on Service and Interactive Robotics


Cite this article

Tabrez, A., Luebbers, M.B. & Hayes, B. A Survey of Mental Modeling Techniques in Human–Robot Teaming. Curr Robot Rep 1, 259–267 (2020). https://doi.org/10.1007/s43154-020-00019-0
