Abstract
Image moment features can describe general target patterns and have good decoupling properties. However, the image moment features that control the camera's rotational motion about the x-axis and y-axis depend strongly on the target image itself. This paper proposes an image moment-based visual positioning and robust tracking control method for an ultra-redundant manipulator. First, six image moment features for controlling camera rotation about the x-axis and y-axis are proposed. A novel method for selecting image features is then presented. To track a moving target, a Kalman filter is combined with adaptive fuzzy sliding mode control; this scheme estimates on-line the changes in image features caused by the target's motion and compensates for the estimation errors. Finally, an experimental system based on the LabVIEW Real-Time environment and an ultra-redundant manipulator is used to verify the real-time performance and practicality of the algorithm. Experimental results illustrate the validity of the proposed image features and tracking method.
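To make the notion of image moment features concrete, the sketch below computes a few classical moment-based quantities (area, centroid, and principal-axis orientation) from a binary target image. This is a minimal illustration of the standard moment definitions only; the six rotation-related features proposed in the paper are more elaborate combinations of moments and are not reproduced here.

```python
import numpy as np

def image_moment_features(img):
    """Compute basic image-moment features from a grayscale/binary image.

    Returns (area, centroid_x, centroid_y, orientation). Illustrative only:
    the paper's six features for x/y-axis rotation control are different,
    more involved moment combinations.
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img.astype(float)
    m00 = w.sum()                      # zeroth-order moment (area)
    xg = (xs * w).sum() / m00          # centroid x (m10 / m00)
    yg = (ys * w).sum() / m00          # centroid y (m01 / m00)
    # second-order central moments (translation-invariant)
    mu20 = ((xs - xg) ** 2 * w).sum()
    mu02 = ((ys - yg) ** 2 * w).sum()
    mu11 = ((xs - xg) * (ys - yg) * w).sum()
    # orientation of the target's principal axis
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return m00, xg, yg, theta

# Example: a 4x5 rectangular blob in a 10x10 image
img = np.zeros((10, 10))
img[2:6, 3:8] = 1.0
area, xg, yg, theta = image_moment_features(img)
```

Because the central moments are computed relative to the centroid, the area, orientation, and centroid-offset features decouple naturally from image translation, which is the decoupling property that makes moment features attractive for visual servoing.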
Funding
This research was supported by the National Natural Science Foundation of China (Grant Nos. 62173047 and 62235018).
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Zhongcan Li. The design of the experiments was performed by Zhongcan Li and Yufei Zhou. The first draft of the manuscript was written by Zhongcan Li, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Li, Z., Zhou, Y., Zhu, M. et al. Image moment-based visual positioning and robust tracking control of ultra-redundant manipulator. J Intell Robot Syst 110, 83 (2024). https://doi.org/10.1007/s10846-024-02103-5