Autonomous Robots, Volume 43, Issue 6, pp 1291–1307

Kinesthetic teaching and attentional supervision of structured tasks in human–robot interaction

  • Riccardo Caccavale
  • Matteo Saveriano
  • Alberto Finzi
  • Dongheui Lee
Article
Part of the following topical collections:
  1. Special Issue: Learning for Human-Robot Collaboration

Abstract

We present a framework that allows a robot manipulator to learn how to execute structured tasks from human demonstrations. The proposed system combines physical human–robot interaction with attentional supervision in order to support kinesthetic teaching, incremental learning, and cooperative execution of hierarchically structured tasks. In the proposed framework, the human demonstration is automatically segmented into basic movements, which are related to a task structure by an attentional system that supervises the overall interaction. The attentional system makes it possible to track the human demonstration at different levels of abstraction and supports implicit non-verbal communication during both the teaching and the execution phases. Attention manipulation mechanisms (e.g., object and verbal cueing) can be exploited by the teacher to facilitate the learning process; at the same time, the attentional system enables flexible and cooperative task execution. The paper describes the overall system architecture and details how cooperative tasks are learned and executed. The proposed approach is evaluated in a human–robot co-working scenario, showing that the robot is able to rapidly learn and flexibly execute structured tasks.
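
To make the pipeline described above concrete, the sketch below illustrates the segmentation step in Python: a kinesthetic demonstration is split into basic movements at pauses in the end-effector motion. This is a minimal illustration, assuming a simple zero-velocity heuristic; the function name, threshold value, and data layout are ours for illustration and are not taken from the paper.

```python
import numpy as np

def segment_demonstration(positions, dt, vel_threshold=0.05):
    """Split a demonstrated trajectory into basic movements at pauses.

    positions: (T, D) array of end-effector positions sampled every dt
    seconds. Returns a list of (start, end) sample-index pairs, one per
    detected movement segment.
    """
    # Finite-difference velocity magnitude between consecutive samples.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > vel_threshold
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t                    # a movement begins
        elif not m and start is not None:
            segments.append((start, t))  # a movement ends at a pause
            start = None
    if start is not None:                # demonstration ended mid-movement
        segments.append((start, len(moving)))
    return segments

# Hypothetical demonstration: move, pause, move back (1-D for brevity).
demo = np.concatenate([np.linspace(0.0, 1.0, 150),
                       np.full(100, 1.0),
                       np.linspace(1.0, 0.0, 150)])[:, None]
print(segment_demonstration(demo, dt=0.01))  # two segments expected
```

In the full framework, each detected segment would then be bound to a node of the hierarchical task structure under attentional supervision; the sketch only covers the cut-point detection.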

Keywords

Multimodal human–robot interaction · Attentional supervision · Learning from demonstration · Intuitive kinesthetic teaching

Notes

Acknowledgements

The research leading to these results has been partially supported by the Technical University of Munich, International Graduate School of Science and Engineering, by the Helmholtz Association, by the RoMoLo project, MISE F/050277/01–02–X32, under EU-funded Actions for R&D, and by the MARsHaL project funded by DIETI.

Supplementary material

Supplementary material 1: 10514_2018_9706_MOESM1_ESM.mp4 (16.9 MB)


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Dipartimento di Ingegneria Elettrica e Tecnologie dell’Informazione, Università di Napoli Federico II, Naples, Italy
  2. Human-Centered Assistive Robotics (HCR), Technical University of Munich, Munich, Germany
  3. Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Weßling, Germany
