Visual feedback control of a robot in an unknown environment (learning control using neural networks)

Original Article

Abstract

In this paper, a visual feedback control approach based on neural networks is presented for a robot with a camera mounted on its end-effector to trace an object in an unknown environment. First, the one-to-one mapping relations from the image feature domain of the object to the joint angle domain of the robot are derived. Second, a method is proposed for generating a desired trajectory of the robot by measuring the image feature parameters of the object. Third, a multilayer neural network is trained off-line to learn the mapping relations, so that it can produce the reference inputs for the robot on-line. Fourth, a learning controller based on a multilayer neural network is designed to realize the visual feedback control of the robot. Finally, the effectiveness of the proposed approach is verified by tracing a curved line with a 6-degrees-of-freedom robot carrying a CCD camera on its end-effector. The approach requires neither tedious calibration of the CCD camera nor complicated coordinate transformations.
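The off-line learning step described above can be sketched as a small multilayer perceptron that approximates the mapping from image feature parameters to joint angles, and is then queried on-line to produce reference joint angles. This is a minimal illustrative sketch only: the network size, the choice of three image features (e.g. centroid coordinates and apparent area), the synthetic training data, and plain batch gradient descent are assumptions for demonstration, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in=3, n_hidden=16, n_out=6):
    """One hidden layer: 3 image features -> 6 joint angles (illustrative sizes)."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(p, X):
    """Return hidden activations and network output for inputs X."""
    H = np.tanh(X @ p["W1"] + p["b1"])
    return H, H @ p["W2"] + p["b2"]

def train(p, X, Y, lr=0.05, epochs=2000):
    """Off-line learning: plain batch gradient descent on squared error."""
    for _ in range(epochs):
        H, Yhat = forward(p, X)
        err = Yhat - Y                          # (N, n_out)
        dH = (err @ p["W2"].T) * (1.0 - H**2)   # backprop through tanh
        p["W2"] -= lr * H.T @ err / len(X)
        p["b2"] -= lr * err.mean(axis=0)
        p["W1"] -= lr * X.T @ dH / len(X)
        p["b1"] -= lr * dH.mean(axis=0)
    return p

# Synthetic stand-in for measured (image feature, joint angle) training pairs.
X = rng.uniform(-1.0, 1.0, (200, 3))
Y = 0.3 * np.sin(X) @ rng.normal(0.0, 1.0, (3, 6))  # smooth target mapping

p = init_mlp()
_, Y0 = forward(p, X)
mse0 = float(np.mean((Y0 - Y) ** 2))    # error before training
p = train(p, X, Y)
_, Y1 = forward(p, X)
mse1 = float(np.mean((Y1 - Y) ** 2))    # error after off-line learning

# On-line use: a measured feature vector yields reference joint angles.
_, q_ref = forward(p, X[:1])
print(f"mse before training: {mse0:.4f}, after: {mse1:.4f}")
```

In the paper's scheme the trained network plays the role of the reference-input generator, so no camera calibration or explicit coordinate transformation appears anywhere in the on-line loop; only measured image features enter the controller.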

Keywords

Computer vision · Image processing · Neural network · Robot control · Visual servoing



Copyright information

© Springer-Verlag 2004

Authors and Affiliations

  1. School of Computer Engineering and Technology, South China University of Technology, Guangzhou, P.R. China
  2. School of Engineering and Technology, Deakin University, Geelong, Victoria, Australia
