
A pose estimation system based on deep neural network and ICP registration for robotic spray painting application

  • ORIGINAL ARTICLE
  • Published:
The International Journal of Advanced Manufacturing Technology

Abstract

Off-line robot trajectory generation methods based on a pre-scanned target model are highly desirable for robotic spray painting applications. Before a generated trajectory can be executed, the relative pose between the actual target and the model must first be calibrated. However, obtaining this relative pose remains a challenge, especially from a safe distance in industrial settings. In this paper, a pose estimation system that meets the requirements of robotic spray painting is proposed to estimate the pose accurately. The system captures an image of the target with an RGB-D vision sensor. The image is then segmented by a modified U-SegNet segmentation network, and the resulting segmentation is registered against pre-scanned model candidates using iterative closest point (ICP) registration to obtain the estimated pose. To strengthen robustness, a deep convolutional neural network is proposed to determine the rough orientation of the target and to guide the selection of model candidates accordingly, thus preventing misalignment during registration. The experimental results are compared with related studies and validate the accuracy and effectiveness of the proposed system.
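The registration step the abstract describes can be illustrated with a minimal point-to-point ICP sketch in NumPy. This is not the authors' implementation (which operates on RGB-D segmentations and pre-scanned model candidates); it is a generic illustration of the ICP loop, alternating nearest-neighbor correspondence with a closed-form (SVD-based) rigid alignment. All function names and parameters here are illustrative assumptions, and the small initial rotation in the usage below stands in for the "rough orientation" guidance that keeps ICP away from misaligned local minima.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # fix improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, model, iters=50, tol=1e-8):
    """Align `source` to `model`: iterate nearest-neighbor matching
    and closed-form rigid alignment; return the cumulative (R, t)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest model point for every source point
        d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(src)), nn]).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(src, model[nn])
        src = src @ R.T + t
        # compose the incremental transform into the running total
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Usage: recover a small known rigid motion between two copies of a cloud.
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))
a = 0.1                                    # small rotation, i.e. "roughly oriented"
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
source = model @ Rz.T + np.array([0.10, -0.05, 0.20])
R, t = icp(source, model)
aligned = source @ R.T + t
```

When the initial misalignment is large, the nearest-neighbor step pairs many points incorrectly and ICP converges to a wrong local minimum; restricting registration to model candidates that already roughly match the target's orientation, as the proposed system does, avoids exactly this failure mode.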



Acknowledgements

The authors would like to express sincere gratitude to the reviewers and the editors for their valuable suggestions.

Funding

This work was supported by the National Natural Science Foundation of China under Grant Nos. U1813208 and 61573358.

Author information


Corresponding author

Correspondence to Fengshui Jing.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Wang, Z., Fan, J., Jing, F. et al. A pose estimation system based on deep neural network and ICP registration for robotic spray painting application. Int J Adv Manuf Technol 104, 285–299 (2019). https://doi.org/10.1007/s00170-019-03901-0

