Compliant Physical Interaction Based on External Vision-Force Control and Tactile-Force Combination

  • Mario Prats
  • Philippe Martinet
  • Sukhan Lee
  • Pedro J. Sanz
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 35)

Abstract

This paper presents external vision-force control and force-tactile integration through three examples of multisensor robotic manipulation in everyday tasks, all built on a general framework for sensor-based compliant physical interaction with the environment. In the first experiment, a mobile manipulator opens a door by pulling the handle with a parallel-jaw gripper, using vision and force sensors in a novel external vision-force coupling approach in which the two modalities are combined at the control level. The second experiment is also a vision-force door opening task, but with a sliding mechanism and a different robot equipped with a three-fingered hand. In the third task, a book is grasped from a bookshelf by integrating tactile and force sensing. The purpose of this paper is twofold: first, to show how vision and force can be combined at the control level by means of an external force loop; and second, to show how the adopted sensor-based manipulation framework can be applied to very different physical interaction tasks in the real world, enabling dependable and versatile manipulation.
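
To make the external coupling idea concrete, below is a minimal sketch (not the authors' implementation) of one control cycle in which an outer force loop offsets the visual reference pose before an inner position-based visual servoing law computes the velocity command. All signal names, gains, and the saturation limit are illustrative assumptions.

    import numpy as np

    def external_vision_force_step(x_ref, x_meas, f_meas, f_des,
                                   k_f=0.002, lam=0.8, v_max=0.05):
        """One cycle of an external vision-force controller (illustrative sketch).

        x_ref  -- reference end-effector pose from the vision task (6-vector)
        x_meas -- current pose estimated by vision (6-vector)
        f_meas -- measured wrench at the wrist force sensor (6-vector)
        f_des  -- desired interaction wrench (6-vector)
        k_f    -- compliance gain mapping wrench error to a pose offset (assumed)
        lam    -- proportional visual servoing gain (assumed)
        v_max  -- saturation on the Cartesian velocity command (assumed)
        """
        # Outer force loop: turn the wrench error into a small pose offset that
        # corrects the visual reference, so the two sensors are combined at the
        # control level rather than at the estimation level.
        x_star = np.asarray(x_ref) + k_f * (np.asarray(f_des) - np.asarray(f_meas))

        # Inner position-based visual servoing law on the corrected reference.
        v_cmd = lam * (x_star - np.asarray(x_meas))

        # Saturate so a large force error cannot produce an unsafe motion.
        return np.clip(v_cmd, -v_max, v_max)

    # Example: door opening, pulling 10 N along the handle axis (x) while
    # vision keeps the gripper aligned with the handle.
    if __name__ == "__main__":
        x_ref  = np.zeros(6)
        x_meas = np.array([0.01, 0.0, 0.002, 0.0, 0.0, 0.01])
        f_meas = np.array([-4.0, 0.0, 0.0, 0.0, 0.0, 0.0])
        f_des  = np.array([-10.0, 0.0, 0.0, 0.0, 0.0, 0.0])
        print(external_vision_force_step(x_ref, x_meas, f_meas, f_des))

In this sketch the force loop never acts on the image directly; it only shifts the pose reference tracked by the vision controller, which is the sense in which the combination happens at the control level.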


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Mario Prats, Robotic Intelligence Lab, Jaume-I University, Castellón, Spain
  • Philippe Martinet, LASMEA, Blaise Pascal University, Clermont-Ferrand, France
  • Sukhan Lee, Intelligent Systems Research Center, Sungkyunkwan University, Jangan-gu, Suwon, South Korea
  • Pedro J. Sanz, Robotic Intelligence Lab, Jaume-I University, Castellón, Spain