
Multi-target objects and complex color recognition model based on humanoid robot

Published in Multimedia Tools and Applications

Abstract

This research focuses on a humanoid robot's recognition model for multiple target objects and complex colors. The purpose is to apply visual detection technology to a humanoid robot and to propose a real-time image extraction method that gives the robot a human-like visual capability. Multi-target object and complex-color experiments with various geometric shapes were conducted on the humanoid robot NAO. The main procedure is as follows: first, an image is acquired by the vision system through the robot's camera; second, the pixel position of each object point is obtained through image processing; finally, the outer-shape position is set for the robot and the procedure is refined to improve efficiency. The experimental results demonstrate that the robot can successfully identify the target objects and their outer contours, and that, through the learning function of the Choregraphe platform, it can learn more complex outer-contour definitions, such as bottles and books, and recognize them. The advantage of the proposed approach is that it can be applied under a variety of complex conditions. In a three-dimensional environment, the NAO robot's vision system was first used to perceive its surroundings, and image processing was then used to identify chess positions. Overall, this work can be regarded as a solid technological and engineering achievement.
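To make the processing pipeline concrete, the following is a minimal sketch of the acquire-locate-segment loop the abstract describes: grab a frame from the NAO camera, segment a color range, and extract each target's outer contour and centroid pixel position. It assumes the NAOqi Python SDK and OpenCV, which the abstract does not name; the robot address, camera parameters, and HSV color range are illustrative placeholders, not values from the paper.

```python
# -*- coding: utf-8 -*-
# Sketch of the acquire -> locate -> segment pipeline from the abstract.
# Assumptions (not stated in the paper): NAOqi Python SDK ("naoqi") and
# OpenCV 4; robot IP/port and the HSV color range are placeholders.

import cv2
import numpy as np
from naoqi import ALProxy

NAO_IP, NAO_PORT = "192.168.1.10", 9559   # hypothetical robot address


def grab_frame(ip, port):
    """Fetch one BGR frame from NAO's top camera via ALVideoDevice."""
    video = ALProxy("ALVideoDevice", ip, port)
    # camera 0 = top, resolution 2 = kVGA (640x480),
    # color space 13 = kBGRColorSpace, 30 fps
    handle = video.subscribeCamera("py_client", 0, 2, 13, 30)
    try:
        result = video.getImageRemote(handle)
        width, height, raw = result[0], result[1], result[6]
        frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 3)
    finally:
        video.unsubscribe(handle)
    return frame


def find_color_targets(frame, hsv_lo, hsv_hi, min_area=200):
    """Threshold a color range in HSV; return (centroid, outer contour) pairs."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    # morphological opening removes small noise blobs before contour search
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    targets = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                        # skip residual noise
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        targets.append(((cx, cy), c))       # pixel position + outer contour
    return targets


if __name__ == "__main__":
    frame = grab_frame(NAO_IP, NAO_PORT)
    # example HSV range for a red target; tune for the actual scene
    red_lo, red_hi = np.array([0, 120, 70]), np.array([10, 255, 255])
    for (cx, cy), contour in find_color_targets(frame, red_lo, red_hi):
        print("target centroid at pixel (%d, %d)" % (cx, cy))
```

In the reported system, such centroid pixel positions would then be mapped into the robot's workspace, for example to identify chess positions as described above; the mapping step is outside the scope of this sketch.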



Acknowledgement

The author gratefully acknowledges Mr. Huang Hu for his support during the initial testing of the first rough model.

Author information


Corresponding author

Correspondence to Li-Hong Juang.

Ethics declarations

Disclosure statement

The author declares no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Juang, LH. Multi-target objects and complex color recognition model based on humanoid robot. Multimed Tools Appl 81, 9645–9669 (2022). https://doi.org/10.1007/s11042-022-11962-9

