Development and validation of a real-time vision-based automatic HDMI wire-split inspection system

  • Original article
  • Published in: The Visual Computer

Abstract

In the production process of HDMI cables, manual intervention is often required, resulting in low production efficiency and long processing times. This paper presents a real-time vision-based automatic inspection system for HDMI cables that reduces the labor required in the production process. The system comprises a hardware design and a software design. Because the wires in HDMI cables are tiny objects, the hardware design includes an image-capture platform with a high-resolution camera and a ring light source to acquire high-resolution, high-quality images of the wires. The software design includes a data augmentation system and an automatic HDMI wire-split inspection system. The former increases the number and diversity of training samples. The latter detects the coordinate position of each wire center and its corresponding Pin-ID (pid) number and outputs the results to the wire-bonding machine to perform subsequent tasks. In addition, a new HDMI cable dataset is created to train and evaluate a series of existing detection network models for this study. The experimental results show that the detection accuracy of the wire center using the existing YOLOv4 detector reaches 99.9%. Furthermore, the proposed system reduces the execution time by about 38.67% compared with the traditional manual wire-split inspection operation.
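The abstract describes a post-processing step in which a detector (e.g., YOLOv4) locates each wire and the system reports wire-center coordinates with their pid numbers. As a minimal, hypothetical sketch of that step: the function name and the `(x, y, w, h)` box format below are illustrative assumptions, not the authors' actual interface.

```python
# Hypothetical sketch of wire-split post-processing: convert detector
# bounding boxes to wire-center points and assign Pin-ID (pid) numbers.
# Assumes each box is (x, y, w, h) in image coordinates, one box per wire.

def assign_pids(boxes):
    """Map detector boxes [(x, y, w, h), ...] to [(pid, cx, cy), ...].

    Centers are sorted by x-coordinate so that pid 1 is the leftmost
    wire, mimicking a fixed left-to-right pin ordering on the connector.
    """
    # Box center = top-left corner plus half the width/height.
    centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]
    centers.sort(key=lambda c: c[0])  # left-to-right pin order
    return [(pid, cx, cy) for pid, (cx, cy) in enumerate(centers, start=1)]
```

For example, three boxes detected out of order would still receive pids 1–3 by horizontal position, and the resulting (pid, center) pairs are what would be forwarded to the wire-bonding machine.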

Figs. 1–13

Data availability

The authors declare that the data supporting the findings of this study are available within this article.

References

  1. Allied Market Research: HDMI cable market by type grade, and application: global opportunity analysis and industry forecast, 2019–2026. Research and Markets. https://www.researchandmarkets.com/reports/5031410/hdmi-cable-market-by-type-grade-and-application (2020). Accessed 2 April 2023

  2. Ghidoni, S., Finotto, M., Menegatti, E.: Automatic color inspection for colored wires in electric cables. IEEE Trans. Autom. Sci. Eng. 12, 596–607 (2015)

  3. Ning, J., Zhang, L., Zhang, D., Wu, C.: Interactive image segmentation by maximal similarity based region merging. Pattern Recogn. 43, 445–456 (2010)

  4. Xu, C., Li, Q., Zhou, Q., Zhang, S., Yu, D., Ma, Y.: Power line-guided automatic electric transmission line inspection system. IEEE Trans. Instrum. Meas. 71, 1–18 (2022)

  5. Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J., Chen, T.: Recent advances in convolutional neural networks. Pattern Recogn. 77, 354–377 (2018)

  6. Bhoite, A., Beke, N., Duffy, T., Moore, M., Torres, M.: Automated fiber optic cable endface field inspection technology. In 2011 IEEE AUTOTESTCON, pp. 226–234 (2011)

  7. Nguyen, V.N., Jenssen, R., Roverso, D.: Intelligent monitoring and inspection of power line components powered by UAVs and deep learning. IEEE Power Energy Technol. Syst. J. 6, 11–21 (2019)

  8. Xie, J., Sun, T., Zhang, J., Ye, L., Fan, M., Zhu, M.: Research on cable defect recognition technology based on image contour detection. In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering, pp. 387–391 (2021)

  9. Sun, J., Yan, S., Song, X.: QCNet: query context network for salient object detection of automatic surface inspection. Vis. Comput. 39, 4391–4403 (2022). https://doi.org/10.1007/s00371-022-02597-w

  10. Wu, H., Li, B., Tian, L., Feng, J., Dong, C.: An adaptive loss weighting multi-task network with attention-guide proposal generation for small size defect inspection. Vis. Comput. 40, 681–698 (2023). https://doi.org/10.1007/s00371-023-02809-x

  11. Xi, Y., Zhou, K., Meng, L.-W., Chen, B., Chen, H.-M., Zhang, J.-Y.: Transmission line insulator defect detection based on swin transformer and context. Mach. Intell. Res. 20, 729–740 (2023)

  12. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39, 1137–1149 (2017)

  13. Yao, D., Shao, Y.: A data efficient transformer based on swin transformer. Vis. Comput. (2023). https://doi.org/10.1007/s00371-023-02939-2

  14. Ai, L., Xie, Z., Yao, R., Yang, M.: MVTr: multi-feature voxel transformer for 3D object detection. Vis. Comput. 40, 1453–1466 (2024)

  15. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In 2017 IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)

  16. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C., Berg, A.: SSD: Single Shot MultiBox Detector. Computer Vision and Pattern Recognition, arXiv:1512.02325v5, 1–17 (2016)

  17. Tian, Z., Shen, C., Chen, H., He, T.: FCOS: Fully convolutional one-stage object detection. In 2019 IEEE/CVF International Conference on Computer Vision, pp. 9626–9635 (2019)

  18. Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., Tian, Q.: CenterNet: keypoint triplets for object detection. In 2019 IEEE/CVF International Conference on Computer Vision, pp. 6568–6577 (2019)

  19. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)

  20. Redmon, J., Farhadi, A.: YOLO9000: Better, Faster, Stronger. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 6517–6525 (2017)

  21. Redmon, J., Farhadi, A.: YOLOv3: An Incremental Improvement. Computer Vision and Pattern Recognition, arXiv:1804.02767v1, 1–6 (2018)

  22. Bochkovskiy, A., Wang, C., Liao, H.: YOLOv4: Optimal Speed and Accuracy of Object Detection. Computer Vision and Pattern Recognition, arXiv:2004.10934v1, 1–17 (2020)

  23. Ge, Z., Liu, S., Wang, F., Li, Z., Sun, J.: YOLOX: Exceeding YOLO Series in 2021. Computer Vision and Pattern Recognition, arXiv:2107.08430v2, 1–7 (2021)

  24. Jocher, G.: YOLOv5. Ultralytics. https://github.com/ultralytics/yolov5 (2020). Accessed 2 April 2023

  25. Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., Nie, W., Li, Y., Zhang, B., Liang, Y., Zhou, L., Xu, X., Chu, X., Wei, X., Wei, X.: YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. Computer Vision and Pattern Recognition, arXiv:2209.02976v1, 1–17 (2022)

  26. Wang, C.-Y., Bochkovskiy, A., Liao, H.-Y. M.: YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Computer Vision and Pattern Recognition, arXiv:2207.02696v1, 1–17 (2022)

  27. Jocher, G.: YOLOv8. Ultralytics. https://github.com/ultralytics/ultralytics (2023). Accessed 30 Nov 2023

  28. Kumar, B.C., Punitha, R., Mohana: YOLOv3 and YOLOv4: Multiple object detection for surveillance applications. In 2020 Third International Conference on Smart Systems and Inventive Technology, pp. 1316–1321 (2020)

  29. Xie, H., Li, Y., Li, X., He, L.: A method for surface defect detection of printed circuit board based on improved YOLOv4. In 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering, pp. 851–857 (2021)

  30. Bian, Y., Fu, G., Hou, Q., Sun, B., Liao, G., Han, H.: Using improved YOLOv5s for defect detection of thermistor wire solder joints based on infrared thermography. In 2021 5th International Conference on Automation, Control and Robots, pp. 29–32 (2021)

  31. Roslan, M. I. B., Ibrahim, Z., Aziz, Z. A.: Real-time plastic surface defect detection using deep learning. In 2022 IEEE 12th Symposium on Computer Applications & Industrial Electronics, pp. 111–116 (2022)

  32. Wang, J., Tang, C., Li, J.: Towards real-time analysis of marine phytoplankton images sampled at high frame rate by a YOLOX-based object detection algorithm. In OCEANS 2022-Chennai, pp. 1–9 (2022)

  33. Shafi, O., Rai, C., Sen, R., Ananthanarayanan, G.: Demystifying TensorRT: characterizing neural network inference engine on nvidia edge devices. In 2021 IEEE International Symposium on Workload Characterization, pp. 226–237 (2021)

  34. Wang, C., Liao, H., Wu, Y., Chen, P., Hsieh, J., Yeh, I.: CSPNet: A new backbone that can enhance learning capability of CNN. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1571–1580 (2020)

  35. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37, 1904–1916 (2015)

  36. Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8759–8768 (2018)

  37. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. Machine Learning, arXiv:1502.03167v3, 1–11 (2015)

  38. Xu, B., Wang, N., Chen, T., Li, M.: Empirical evaluation of rectified activations in convolutional network. Machine Learning, arXiv:1505.00853v2, 1–5 (2015)

  39. Zhang, Y., Han, J. H., Kwon, Y., Moon, Y.: A new architecture of feature pyramid network for object detection. In 2020 IEEE 6th International Conference on Computer and Communications, pp. 1224–1228 (2020)

Acknowledgements

This research was funded in part by JAWS CO., LTD. and the National Science and Technology Council of Taiwan under Grant NSTC 112-2221-E-032-036-MY2.

Author information

Corresponding author

Correspondence to Chi-Yi Tsai.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article

Chiu, YC., Tsai, CY. & Chang, PH. Development and validation of a real-time vision-based automatic HDMI wire-split inspection system. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03436-w
