
Manipulator Point Teaching System Design Integrated with Image Processing and Iterative Learning Control

  • Yu-Shin Yang
  • Syh-Shiuh Yeh

Abstract

This study proposes integrating a manipulator point teaching system with an image processing technique and the iterative learning control (ILC) method, providing multiple-point teaching and positioning processes that are easy to operate, rapid, and accurate. First, a teaching device is used to manipulate the manipulator so that the teaching target is brought into the camera's field of view. The speeded-up robust features (SURF) algorithm is then used to define the target feature, and random sample consensus (RANSAC) is used to estimate the homography matrix and thereby obtain the center position of the target feature. The manipulator position control command is then calculated with reference to this center position. For the computation, this study applies the ILC method, which uses the manipulator position control command and the motion error during iterative computation. Finally, the position control command converges to the optimum value at the manipulator teaching point, so that the manipulator can execute automatic and accurate continuous motion according to this teaching point. The method was applied to screw holes; the results show an average convergence in positioning error of 70%, with an average final positioning error of less than 15 μm. The experimental results confirm that the proposed manipulator point teaching system is feasible.
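The pipeline described above can be summarized in a short sketch. The following Python fragment is purely illustrative and is not the authors' implementation: it assumes OpenCV with the contrib modules (for cv2.xfeatures2d.SURF_create) and NumPy, and the 0.7 ratio-test threshold, 5.0-pixel RANSAC reprojection threshold, and 0.5 learning gain are arbitrary example values. It locates the taught target's center by matching SURF features against a stored template, estimating a homography with RANSAC, and projecting the template center into the camera frame; a simple P-type ILC update then refines the position command from the measured error.

    # Illustrative sketch only (not the authors' code): SURF + RANSAC homography
    # to find the target center, followed by a P-type ILC command update.
    # Assumes opencv-contrib-python (cv2.xfeatures2d) and numpy are available.
    import cv2
    import numpy as np

    def locate_target_center(template, frame):
        """Return the template center projected into the camera frame (pixels), or None."""
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp_t, des_t = surf.detectAndCompute(template, None)
        kp_f, des_f = surf.detectAndCompute(frame, None)
        if des_t is None or des_f is None:
            return None

        # Match SURF descriptors and keep the better correspondences (ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des_t, des_f, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
        if len(good) < 4:
            return None  # not enough correspondences to estimate a homography

        src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # RANSAC rejects outlier matches while estimating the homography.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None

        h, w = template.shape[:2]
        center = np.float32([[[w / 2.0, h / 2.0]]])
        return cv2.perspectiveTransform(center, H).reshape(2)

    def ilc_update(command, error, gain=0.5):
        """P-type ILC: next-iteration command = current command + gain * current error."""
        return command + gain * error

In the paper's setting, the pixel-space error would first be mapped into a Cartesian correction of the manipulator command (via the camera calibration) before the ILC update; that mapping is omitted from this sketch.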

Keywords

Image processing · Iterative learning control · Manipulators · Point teaching


Notes

Acknowledgements

This research was supported in part by the Ministry of Science and Technology, Taiwan, R.O.C., under Contracts MOST104-2221-E-027-132 and MOST103-2218-E-009-027-MY2. The authors would like to thank representatives from the Motorcon Robotic Company for their beneficial discussions with the MOST project team.


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Institute of Mechatronic Engineering, National Taipei University of Technology, Taipei, Republic of China
  2. Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Republic of China
