
Target Reaching by Using Visual Information and Q-learning Controllers

Published in: Autonomous Robots

Abstract

This paper presents a solution to a manipulation control problem: target identification and grasping. The proposed controller is designed for a real platform combined with a monocular vision system. The objective of the controller is to learn an optimal policy to reach and grasp a spherical object of known size, randomly placed in the environment. To accomplish this, the task is treated as a reinforcement learning problem, in which the controller learns the situation-action mapping by trial and error. The optimal policy is found using the Q-Learning algorithm, a model-free reinforcement learning technique that rewards actions moving the arm closer to the target.
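As a rough illustration of the tabular Q-Learning update such a controller relies on, the sketch below shows an epsilon-greedy agent updating a state-action value table; the discrete action set, learning rate, discount factor, and state encoding are assumptions made for illustration, not the paper's actual design.

```python
import random
from collections import defaultdict

# Hypothetical setup (not the paper's): states are coarse arm/target
# configurations, actions are small joint-space displacements.
ACTIONS = ["dq1+", "dq1-", "dq2+", "dq2-"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One tabular Q-Learning step: Q += alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

A reward that increases as the vision system reports a smaller arm-target distance, as the abstract describes, would drive the learned policy toward the grasping configuration.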

The vision system uses geometric computation to simplify the segmentation of the moving target (a spherical object) and to estimate the target parameters. To reduce learning time, the knowledge acquired in simulation was ported to the real platform, a PUMA 560 industrial robot manipulator. Experimental results demonstrate the effectiveness of the adaptive controller, which relies on direct perception of the environment rather than an explicit global target position.
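For intuition on why a sphere of known size needs no external position measurement, the sketch below recovers the target's range from its apparent radius under a pinhole-camera assumption; the function name, parameters, and the Z ≈ f·R/r approximation are illustrative and not the paper's exact geometric computation.

```python
def estimate_sphere_range(apparent_radius_px, sphere_radius_m, focal_length_px):
    """Approximate range to a sphere of known physical radius.

    Pinhole model: a sphere of radius R at range Z projects to roughly
    r = f * R / Z pixels, so Z ~= f * R / r (valid when Z >> R).
    """
    if apparent_radius_px <= 0:
        raise ValueError("apparent radius must be positive")
    return focal_length_px * sphere_radius_m / apparent_radius_px

# Example: a 5 cm ball imaged with a 20-pixel radius by a camera with an
# 800-pixel focal length is roughly 2 m away.
print(estimate_sphere_range(20.0, 0.05, 800.0))
```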





Cite this article

Distante, C., Anglani, A. & Taurisano, F. Target Reaching by Using Visual Information and Q-learning Controllers. Autonomous Robots 9, 41–50 (2000). https://doi.org/10.1023/A:1008972101435

