
Improvements of detection accuracy and its confidence of defective areas by YOLOv2 using a data set augmentation method

  • Original Article
  • Published:
Artificial Life and Robotics

Abstract

Recently, CNNs (convolutional neural networks) and Grad-CAM (gradient-weighted class activation mapping) have been applied to various kinds of defect detection and position recognition for industrial products. However, training a CNN model requires a large amount of image data to acquire the desired generalization ability. In addition, it is not easy for Grad-CAM to clearly identify the defect area on which a classification result is based. Moreover, when these models are deployed in an actual production line, the CNN and Grad-CAM computations have to be called sequentially for defect detection and position recognition, so the processing time is a concern. In this paper, the authors apply YOLOv2 (You Only Look Once) to defect detection and its visualization so that both are processed at once. In general, a YOLOv2 model can be built with fewer training images; however, a laborious labeling process is required to prepare ground truth data for training. A data set for training a YOLOv2 model consists of image files and the corresponding ground truth data file, named gTruth, which holds the names of all the image files together with their labeled information, such as label names and box dimensions. Therefore, YOLOv2 requires data set augmentation of not only the images but also the gTruth data. The target products dealt with in this paper are manufactured in high-mix, low-volume production, and defects occur only infrequently. Moreover, because the production line is fixed indoors, the only valid image augmentation that can be applied is the horizontal flip. In this paper, a data set augmentation method is proposed to efficiently generate training data for YOLOv2 even in such a production situation and, consequently, to enhance the performance of defect detection and its visualization. The effectiveness is shown through experiments.
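
The abstract notes that augmenting a YOLOv2 data set means transforming the bounding-box labels in the gTruth data together with the images. The following is a minimal sketch (not the authors' implementation) of the kind of paired transformation involved for the horizontal flip, assuming boxes are stored as [x, y, width, height] in pixels with x measured from the left edge; the function name and file names are hypothetical, and PIL is used only for illustration.

```python
# Sketch: horizontally flip an image and mirror its bounding-box labels,
# so the saved image and its gTruth-style entry stay consistent.
from PIL import Image

def hflip_with_boxes(image_path, boxes, out_path):
    """Flip the image left-right and mirror each [x, y, w, h] box."""
    img = Image.open(image_path)
    flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
    flipped.save(out_path)

    img_w = img.width
    flipped_boxes = []
    for x, y, w, h in boxes:
        # The box keeps its size; only its horizontal position is mirrored.
        flipped_boxes.append([img_w - x - w, y, w, h])
    return flipped_boxes

# Hypothetical usage: one defect box labeled on 'sample_001.png'
# new_boxes = hflip_with_boxes("sample_001.png", [[120, 45, 60, 30]],
#                              "sample_001_flip.png")
```

The returned boxes would then be written back into the augmented gTruth entry alongside the name of the flipped image file.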



Author information

Corresponding author

Correspondence to Fusaomi Nagata.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was presented in part at the joint symposium of the 28th International Symposium on Artificial Life and Robotics, the 8th International Symposium on BioComplexity, and the 6th International Symposium on Swarm Behavior and Bio-Inspired Robotics (Beppu, Oita and Online, January 25-27, 2023).

About this article


Cite this article

Arima, K., Nagata, F., Shimizu, T. et al. Improvements of detection accuracy and its confidence of defective areas by YOLOv2 using a data set augmentation method. Artif Life Robotics 28, 625–631 (2023). https://doi.org/10.1007/s10015-023-00885-9
