Autonomous Robots, Volume 29, Issue 2, pp 201–218

Reliable non-prehensile door opening through the combination of vision, tactile and force feedback

  • Mario Prats
  • Pedro J. Sanz
  • Angel P. del Pobil

Abstract

Although vision and force feedback, whether measured at the wrist or at the joint level, have received considerable attention in the robotic manipulation literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored.

In fact, there are situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions and outliers, whereas force feedback can only provide useful information along the directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are therefore not sufficient to guarantee successful execution.
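
This complementarity gap can be made concrete with a minimal sketch of hybrid vision/force control in a task frame. It is an illustration only, not the controller used in this work: the selection matrix, gains and function names are assumptions chosen for the example.

import numpy as np

# Hypothetical selection matrix: only translation along x is constrained by the contact.
S = np.diag([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
K_f = 0.002   # illustrative force gain
K_v = 0.8     # illustrative vision gain

def hybrid_velocity(f_measured, f_desired, pose_error_vision):
    """6-D task-frame velocity: force loop on constrained axes, vision loop on the rest."""
    v_force = K_f * (np.asarray(f_desired) - np.asarray(f_measured))   # useful only where contact constrains motion
    v_vision = K_v * np.asarray(pose_error_vision)                     # subject to calibration errors, occlusions, outliers
    return S @ v_force + (np.eye(6) - S) @ v_vision

# If the visual pose estimate is biased along an unconstrained axis (e.g. sliding
# off a door handle), the force loop is blind there and the vision loop is itself
# the source of the error, so the bias is never corrected.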

Many of the tasks performed in daily life that do not require a firm grasp belong to this category, so it is important to develop strategies for dealing robustly with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated on the task of opening a sliding door. Results show that the vision-tactile-force approach outperforms both vision-force and force-only control, in the sense that it corrects the vision errors while guaranteeing a suitable contact configuration.
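
As a purely illustrative continuation of the sketch above, and again not the method proposed in the article, a fingertip tactile array could supply the missing feedback along the otherwise unconstrained direction by servoing the pressure centroid back to the centre of the array; the sensor layout and gain below are hypothetical.

import numpy as np

K_t = 0.01   # hypothetical tactile gain (m/s per tactel of centroid offset)

def tactile_correction(tactile_image):
    """Lateral velocity that recenters the contact on a fingertip tactile array.

    tactile_image: 2-D array of pressure readings (hypothetical sensor layout).
    """
    total = tactile_image.sum()
    if total < 1e-6:                 # no contact detected: apply no correction
        return 0.0
    rows, cols = tactile_image.shape
    _, xs = np.mgrid[0:rows, 0:cols]
    centroid_col = (xs * tactile_image).sum() / total
    # Drive the pressure centroid toward the centre column of the array.
    return -K_t * (centroid_col - (cols - 1) / 2.0)

# Added to the unconstrained direction of the hybrid law above, this term corrects
# the vision bias while the force loop keeps the pushing contact on the handle.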

Keywords

Sensor-based manipulation · Tactile sensing · Service robotics

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Mario Prats 1
  • Pedro J. Sanz 1
  • Angel P. del Pobil 1, 2

  1. Computer Science and Engineering Department, Jaume-I University, Castellón, Spain
  2. Department of Interaction Science, Sungkyunkwan University, Seoul, South Korea
