
Generating Synthetic Training Data for Assembly Processes


Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 633)

Abstract

Current assembly assistance systems use different methods for object detection. Deep learning methods are emerging but have not yet been explored in depth for this domain. These methods require large amounts of application-specific training data. Using 3D data to generate synthetic training data is an obvious choice, since such data is usually available for assembly processes. However, guiding a worker through an entire assembly process requires detecting not only the individual parts but also all intermediate assembly states. We present a system that takes the assembly sequence and the STEP file of the assembly as input and automatically generates synthetic training data for a convolutional neural network that identifies the entire assembly process. Experimental validation demonstrates that domain randomization improves the results and that the developed system outperforms state-of-the-art synthetic training data.
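The core idea the abstract describes, rendering CAD-derived views of each assembly state and varying the scene so the detector does not overfit to rendering artifacts, can be illustrated with a minimal domain-randomization sketch. This is not the authors' implementation; the function name, array-based compositing, and noise background are assumptions chosen for a self-contained example. A real pipeline would paste renders onto photographic backgrounds and vary pose, lighting, and occlusion as well.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_sample(part_render, img_size=(128, 128)):
    """Composite a rendered part onto a randomized background with
    jittered brightness and position -- one domain-randomization step.

    part_render: HxW grayscale array of the rendered CAD part.
    Returns the synthetic image and a YOLO-style normalized bounding
    box (x_center, y_center, width, height) as the label.
    """
    h, w = part_render.shape
    H, W = img_size
    # Random noise background; real pipelines use textures or photos.
    canvas = rng.uniform(0, 255, size=(H, W))
    # Random placement of the part on the canvas.
    y = rng.integers(0, H - h + 1)
    x = rng.integers(0, W - w + 1)
    # Random brightness jitter, clipped to the valid pixel range.
    gain = rng.uniform(0.6, 1.4)
    canvas[y:y + h, x:x + w] = np.clip(part_render * gain, 0, 255)
    # The label comes for free: the paste position is the ground truth.
    bbox = ((x + w / 2) / W, (y + h / 2) / H, w / W, h / H)
    return canvas.astype(np.uint8), bbox
```

Because the part's placement is chosen by the generator, the bounding-box annotation is produced automatically, which is precisely the labeling effort that synthetic data avoids.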

Keywords

  • Object detection
  • Synthetic training data
  • Domain randomization
  • Assembly assistance systems
  • Assembly sequence



Acknowledgements

This research has been funded by the German Federal Ministry of Education and Research (BMBF) under the program “Innovationen für die Produktion, Dienstleistung und Arbeit von morgen” and is supervised by Projektträger Karlsruhe (PTKA). The authors wish to acknowledge the funding agency and all the DPNB project partners for their contribution.

Author information

Correspondence to Johannes Dümmel.


Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper


Cite this paper

Dümmel, J., Kostik, V., Oellerich, J. (2021). Generating Synthetic Training Data for Assembly Processes. In: Dolgui, A., Bernard, A., Lemoine, D., von Cieminski, G., Romero, D. (eds) Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems. APMS 2021. IFIP Advances in Information and Communication Technology, vol 633. Springer, Cham. https://doi.org/10.1007/978-3-030-85910-7_13


  • DOI: https://doi.org/10.1007/978-3-030-85910-7_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85909-1

  • Online ISBN: 978-3-030-85910-7