Robot Authority in Human-Machine Teams: Effects of Human-Like Appearance on Compliance

  • Kerstin S. Haring
  • Ariana Mosley
  • Sarah Pruznick
  • Julie Fleming
  • Kelly Satterfield
  • Ewart J. de Visser
  • Chad C. Tossell
  • Gregory Funke
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11575)

Abstract

Current technology allows for the deployment of security patrol and police robots. It is expected that in the near future robots and similar technologies will exhibit some degree of authority over people within human-machine teams. Studies in classical psychology investigating compliance have shown that people tend to comply with requests from others who display or are assumed to have authority. In this study, we investigated the effect of a robot’s human-like appearance on compliance with a request. We compared two different robots to a human control condition. The robots assumed the role of a coach in learning a difficult task. We hypothesized that participants would comply more with robots high in human-like appearance than with robots low in human-like appearance. The coach continuously prompted the participant to keep practicing the task beyond the point at which the participant wished to proceed. Compliance was measured by the time practiced after the first prompt and by the total number of images processed. Results showed that compliance with the request was highest with the human coach and that compliance with both robots was significantly lower. Nevertheless, we showed that robots can be used as persuasive coaches that help a human teammate persist in a training task. There was no difference between the High and Low Human-Like robots in compliance time; however, the Low Human-Like robot led people to practice on more images than the High Human-Like robot. The implication of this study is that robots are currently inferior to humans when it comes to eliciting compliance in a human-machine team. Future robots need to be carefully designed in an authoritative way if maximizing compliance with their requests is the primary goal.

Keywords

Human-robot interaction · Human-machine teaming · Anthropomorphism · Machine authority · Compliance

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. United States Air Force Academy, Warfighter Effectiveness Research Center, AF Academy, USA
  2. Adler University, Chicago, USA
  3. Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, USA
  4. Daniel Felix Ritchie School of Engineering and Computer Science, University of Denver, Denver, USA