Decision-Theoretic Human-Robot Interaction: Designing Reasonable and Rational Robot Behavior

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9979)

Abstract

Autonomous robots are moving out of research labs and factory cages into public spaces: people’s homes, workplaces, and lives. A key design challenge in this migration is how to build autonomous robots that people want to use and can safely collaborate with on complex tasks. For people to work closely and productively with robots, robots must behave in ways that people can predict and anticipate. Robots choose their next action using the classical sense-think-act processing cycle, and roboticists design both the actions and the action-choice mechanisms. This design process determines a robot’s behavior and, in turn, how well people are able to interact with it. Crafting how a robot chooses its next action is therefore critical when designing social robots for interaction and collaboration. This paper identifies reasonableness and rationality, two key concepts well known in Choice Theory, that can guide the robot design process so that the resulting behaviors are easier for humans to predict and, as a result, more enjoyable to interact and collaborate with. Designers can use the notions of reasonableness and rationality to craft action-selection mechanisms that yield better robot designs for human-robot interaction. We show how Choice Theory can be used to prove that specific robot behaviors are reasonable and/or rational, providing a formal, useful, and powerful design guide for developing robot behaviors that people find more intuitive, predictable, and fun, resulting in more reliable and safe human-robot interaction and collaboration.
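The abstract's notion of rationality in the Choice Theory sense can be sketched as a robot whose action-selection step maximizes a consistent (complete, transitive) preference ordering represented by a utility function. The sketch below is illustrative only, not the paper's implementation; all names (`choose_action`, `utility`, the toy actions and scores) are hypothetical:

```python
# Hypothetical sketch of a rational action-selection step in a
# sense-think-act loop: the robot holds preferences represented by a
# utility function and always selects a utility-maximising action.
# Deterministic tie-breaking keeps the behaviour predictable to a
# human observer.

def choose_action(actions, utility, state):
    """Return a most-preferred action; ties are broken by sorting the
    candidate actions first, so repeated runs give the same choice."""
    return max(sorted(actions), key=lambda a: utility(state, a))

def utility(state, action):
    # Toy scoring (hypothetical): higher utility for actions that make
    # progress toward the goal while staying out of a person's path.
    scores = {"wait": 0.2, "go_left": 0.5, "go_right": 0.9}
    return scores[action]

state = {"goal": "kitchen"}
best = choose_action(["wait", "go_left", "go_right"], utility, state)
print(best)  # → go_right
```

Because the choice is an argmax over a fixed utility, an observer who can infer the robot's utility can predict its next action, which is the sense in which a rational action-selection mechanism supports legible, predictable behavior.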

Keywords

Human-robot interaction · Designing robot behavior · Legible robot behavior · Predictable robot behavior · Choice theory


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. QCIS, University of Technology Sydney and Codex, Stanford University, Stanford, USA