Applications in HHI: Physical Cooperation

  • Markus Rickert
  • Andre Gaschler
  • Alois Knoll
Reference work entry

Abstract

Humans critically depend on continuous verbal and nonverbal interaction: for aligning their mental states, for synchronizing their intentions and goals, and for performing joint tasks such as carrying a heavy object together, manipulating objects in a shared workspace, or handing over components and assembling larger structures in teams. Typically, physical interaction is initiated by a short joint planning dialog and then accompanied by a stream of verbal utterances. To achieve a smooth interaction flow in a given situation, humans use all of their communication modalities and senses, often unconsciously. As robotic co-workers that serve humans are introduced, some of them humanoid and others of different shape, humans will expect them to be integrated into the execution of the task at hand just as a human co-worker would be. Such seamless integration will only be possible if these robots provide a number of basic action primitives, for example, handover from human to robot and vice versa. The robots must also recognize and anticipate the intention of the human by analyzing and understanding the scene as far as necessary for jointly working on the task. Most importantly, the robotic co-worker must be able to carry on a verbal and nonverbal dialog with the human partner, in parallel with and relating to the physical interaction process. In this chapter, we give an overview of the ingredients of an integrated physical interaction scenario. These include methods to plan activities, to produce safe and human-interpretable motion, to interact through multimodal communication, to schedule actions for a joint task, and to align and synchronize the interaction by understanding human intentions. We summarize the state of the art in physical human-humanoid interaction systems and conclude by presenting three humanoid systems as case studies.
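
To make the notion of a basic action primitive more concrete, the following is a minimal sketch, not taken from the chapter, of a robot-to-human handover modeled as a small state machine: the robot carries the object to an agreed transfer pose, waits until a pulling force above a threshold indicates that the human has grasped it, then opens the gripper and retracts. The robot interface and all method names (move_to, measure_external_force, open_gripper) are hypothetical placeholders for a real robot API.

```python
# Illustrative sketch of a robot-to-human handover primitive.
# The robot object and its methods are hypothetical placeholders.
import time
from enum import Enum, auto


class HandoverState(Enum):
    APPROACH = auto()   # move the object toward the agreed transfer pose
    PRESENT = auto()    # hold the object and wait for the human to grasp it
    RELEASE = auto()    # open the gripper once a pull is detected
    RETRACT = auto()    # withdraw the arm to a safe home posture
    DONE = auto()


def handover_to_human(robot, transfer_pose, home_pose,
                      pull_threshold_n=5.0, timeout_s=10.0):
    """Run a simple robot-to-human handover; return True on success."""
    state = HandoverState.APPROACH
    deadline = time.monotonic() + timeout_s

    while state is not HandoverState.DONE:
        if state is HandoverState.APPROACH:
            robot.move_to(transfer_pose)            # hypothetical motion command
            state = HandoverState.PRESENT
        elif state is HandoverState.PRESENT:
            # Release only once the human pulls on the object hard enough.
            if robot.measure_external_force() > pull_threshold_n:
                state = HandoverState.RELEASE
            elif time.monotonic() > deadline:
                robot.move_to(home_pose)            # give up and retract
                return False
            else:
                time.sleep(0.02)                    # poll the force sensor at ~50 Hz
        elif state is HandoverState.RELEASE:
            robot.open_gripper()
            state = HandoverState.RETRACT
        elif state is HandoverState.RETRACT:
            robot.move_to(home_pose)
            state = HandoverState.DONE
    return True
```

The same pattern extends to the other primitives mentioned above, for example a human-to-robot handover that closes the gripper when the object is held still at the transfer pose.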

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. fortiss, An-Institut Technische Universität München, München, Germany
