Visual Servoing Control of Robot Manipulator

  • Chenguang Yang
  • Hongbin Ma
  • Mengyin Fu
Chapter

Abstract

Vision sensors are particularly useful for robots because they mimic the human sense of sight and allow noncontact measurement of the environment. In this chapter, we first give a brief introduction to visual servoing and the applications of vision sensors. Then, a human–robot cooperation method is presented, in which the human operator is in charge of the main operation and robot autonomy is gradually added to support the execution of the operator's intent. Next, a stereo-camera-based tracking controller is developed for a bimanual robot: stereo imaging and range sensors are utilized to realize both eye-to-hand and eye-in-hand servoing. Finally, a decision mechanism is proposed to partition the joint space for efficient operation of the two-arm manipulator.
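To make the servoing idea concrete, the sketch below implements the classic image-based visual servoing (IBVS) law found in the tutorial literature, v = -λ L⁺ (s - s*), where s is the vector of measured image features, s* the desired features, and L the interaction (image Jacobian) matrix of point features. This is a minimal illustration of the general technique, not the controller developed in this chapter; the function names and the simple point-feature model are assumptions of the example.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z.

    Maps the camera twist (vx, vy, vz, wx, wy, wz) to the point's
    image-plane velocity (x_dot, y_dot).
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: commanded camera twist v = -gain * L^+ (s - s*).

    features, desired: (N, 2) arrays of current and target image points.
    depths: length-N depth estimates for the observed points.
    """
    e = (np.asarray(features) - np.asarray(desired)).ravel()  # error s - s*
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    # Pseudo-inverse handles non-square L (N points give a 2N x 6 matrix).
    return -gain * np.linalg.pinv(L) @ e
```

Driving the camera with this twist makes the feature error decay exponentially when the depth estimates are accurate; in an eye-in-hand setup the twist is mapped to joint velocities through the manipulator Jacobian.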

Keywords

Application Program Interface · Inverse Kinematics · Kinematic Chain · Iterative Closest Point · Visual Servoing
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Science Press and Springer Science+Business Media Singapore 2016

Authors and Affiliations

  1. Key Lab of Autonomous Systems and Networked Control, Ministry of Education, South China University of Technology, Guangzhou, China
  2. Centre for Robotics and Neural Systems, Plymouth University, Devon, UK
  3. School of Automation, Beijing Institute of Technology, Beijing, China
  4. State Key Lab of Intelligent Control and Decision of Complex System, Beijing Institute of Technology, Beijing, China