Autonomous Robots, Volume 42, Issue 5, pp. 1037–1051

Co-manipulation with a library of virtual guiding fixtures

  • Gennaro Raiola
  • Susana Sanchez Restrepo
  • Pauline Chevalier
  • Pedro Rodriguez-Ayerbe
  • Xavier Lamy
  • Sami Tliba
  • Freek Stulp
Part of the following topical collections:
  1. Special Issue: Learning for Human-Robot Collaboration

Abstract

Virtual guiding fixtures constrain the movements of a robot to task-relevant trajectories and have been successfully applied to, for instance, surgical and manufacturing tasks. Whereas previous work has considered guiding fixtures for single tasks, in this paper we propose a library of guiding fixtures for multiple tasks, together with methods for (1) creating and adding guides based on machine learning; (2) selecting guides on-line based on a probabilistic implementation of guiding fixtures; and (3) refining existing guides with an incremental learning method. We demonstrate on an industrial task that a library of guiding fixtures provides an intuitive haptic interface for joint human–robot completion of tasks, and improves performance in terms of task execution time, mental workload, and errors.
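To make the on-line selection step concrete, the following is a minimal sketch (not the authors' implementation) of probabilistic guide selection: each guide in the library is modeled as a Gaussian over end-effector positions fitted to demonstrations, and Bayes' rule yields the responsibility of each guide given the current position, which can then weight each guide's attraction force. All class and function names here are illustrative assumptions.

import numpy as np

class VirtualGuide:
    """One guide from the library, reduced here to a single Gaussian
    over end-effector positions fitted to demonstrations."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.cov = np.asarray(cov, dtype=float)

    def likelihood(self, x):
        # Multivariate Gaussian density of the current position x.
        d = x - self.mean
        norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(self.cov))
        return np.exp(-0.5 * d @ np.linalg.inv(self.cov) @ d) / norm

def guide_responsibilities(library, x, priors=None):
    # Posterior probability of each guide given x (Bayes' rule);
    # these weights can then scale each guide's guiding force.
    if priors is None:
        priors = np.full(len(library), 1.0 / len(library))
    post = priors * np.array([g.likelihood(x) for g in library])
    return post / (post.sum() + 1e-300)  # guard against all-zero likelihoods

# Usage: two guides; the operator's hand is near the first one,
# so almost all of the guiding force is attributed to guide 0.
library = [VirtualGuide([0.0, 0.0, 0.0], 0.01 * np.eye(3)),
           VirtualGuide([0.5, 0.0, 0.0], 0.01 * np.eye(3))]
print(guide_responsibilities(library, np.array([0.05, 0.0, 0.0])))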

Keywords

Human–robot collaborative tasks in manufacturing · Learning from demonstration · Virtual fixture

Notes

Acknowledgements

This project has received funding from DIGITEO (www.digiteo.fr).

Supplementary material

Supplementary material 1 (MP4 32,045 KB)


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. Robotics and Computer Vision, ENSTA-ParisTech, Palaiseau, France
  2. CEA-List, Gif-sur-Yvette, France
  3. Univ. Paris-Sud, CNRS, CentraleSupelec, Gif-sur-Yvette, France
  4. FLOWERS Research Team, INRIA Bordeaux Sud-Ouest, France
  5. German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Wessling, Germany
