
Extraction of line objects from piping and instrumentation diagrams using an improved continuous line detection algorithm

  • Original Article
  • Published in Journal of Mechanical Science and Technology

Abstract

Digitizing image-format piping and instrumentation diagrams (P&IDs) consists of three steps: detecting the information objects that constitute a P&ID, identifying the connection relationships between the detected objects, and creating the digital P&ID. This paper presents a P&ID line object extraction method that uses an improved continuous line detection algorithm to extract the information objects that constitute P&IDs. The improved algorithm reduces the time spent on line extraction by performing edge detection with a differential filter, and it detects continuous lines in the vertical, horizontal, and diagonal directions. In addition, it processes diagonal continuous lines after image differentiation to handle short continuous lines, which are a major cause of misdetection when detecting diagonal lines. The line object extraction method that incorporates this algorithm consists of three steps. First, the preprocessing step removes the diagram’s outline borders and heading areas. Second, the detection step detects continuous lines and then the special signs needed to distinguish different line types. Third, the postprocessing step uses the detected line signs to identify continuous lines that must be converted to other line types and changes their types; finally, the line and flow arrow detection information are merged. To verify the proposed method, a prototype image-format P&ID line extraction system was implemented, and line extraction tests were conducted on nine test P&IDs. The overall average precision and recall were 95.26 % and 91.25 %, respectively, demonstrating good line extraction performance.
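The paper itself is behind a paywall here, but the core idea named in the abstract — scanning a binarized diagram for continuous runs of dark pixels that exceed a minimum length — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the fixed binarization threshold of 128, and the `min_len` parameter are illustrative assumptions, and only the horizontal direction is shown (the vertical and diagonal cases scan along columns and diagonals analogously).

```python
import numpy as np

def detect_horizontal_lines(img, min_len=20):
    """Find horizontal runs of dark pixels (candidate line segments).

    img: 2-D uint8 array, 0 = dark (ink), 255 = background.
    Returns a list of (row, col_start, col_end) runs at least min_len long.
    Illustrative sketch only -- not the algorithm from the paper.
    """
    binary = img < 128          # assumed fixed threshold for the sketch
    runs = []
    for r in range(binary.shape[0]):
        row = binary[r]
        c = 0
        while c < row.size:
            if row[c]:
                start = c
                while c < row.size and row[c]:
                    c += 1
                # keep only runs long enough to be a line, not noise
                if c - start >= min_len:
                    runs.append((r, start, c - 1))
            else:
                c += 1
    return runs

# Tiny synthetic diagram: one 30-pixel horizontal line plus a noise dot
canvas = np.full((10, 50), 255, dtype=np.uint8)
canvas[4, 5:35] = 0   # horizontal line spanning columns 5..34
canvas[7, 10] = 0     # isolated noise pixel, too short to be a line
print(detect_horizontal_lines(canvas))  # → [(4, 5, 34)]
```

The `min_len` cutoff plays the role of the short-line filtering the abstract mentions: isolated dark pixels and short fragments are discarded rather than reported as lines.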
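The evaluation numbers quoted above (95.26 % precision, 91.25 % recall) follow the standard definitions, which can be made concrete with a small hedged sketch; the exact-match criterion and segment encoding below are assumptions for illustration, since real evaluations typically use a tolerance when matching detected segments to ground truth.

```python
def precision_recall(detected, ground_truth):
    """Precision and recall over sets of detected line segments.

    Both arguments are collections of hashable segment descriptors
    (e.g. (row, col_start, col_end) tuples). For this sketch a
    detection counts as correct only if it matches ground truth exactly.
    """
    detected, ground_truth = set(detected), set(ground_truth)
    tp = len(detected & ground_truth)               # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Example: 2 of 3 detections are correct; 2 of 4 true lines were found
p, r = precision_recall({"a", "b", "x"}, {"a", "b", "c", "d"})
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.67 recall=0.50
```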



Acknowledgments

This research was supported by the Basic Science Research Program (No. NRF-2022R1A2C2005879) through the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT); by the Carbon Reduction Model Linked Digital Engineering Design Technology Development Program (No. RS-2022-00143813) funded by the Korean government (MOTIE); and by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00969 and No. 2022-0-00431).

Author information

Corresponding author

Correspondence to Duhwan Mun.

Additional information

Yoochan Moon is a Master’s student in the School of Mechanical Engineering, Korea University, Seoul, Korea. He received his B.S. in Precision Mechanical Engineering from Kyungpook National University. His research interests include computer-aided design, engineering information recognition, deep learning, and knowledge-based engineering.

Seung-Tae Han is a Research Associate in the School of Mechanical Engineering, Korea University, Seoul, Korea. He received his B.S. in Mechanical Engineering from Dankook University. His research interests include computer-aided design, deep learning, and computer vision.

Jinwon Lee is a Professor in the Department of Industrial and Management Engineering, Gangneung-Wonju National University, Gangwon-do, Korea. He previously worked at Texas A&M University (College Station, USA). He obtained his Ph.D. in Industrial Engineering in 2019 at Ajou University. His current research interests are neural networks, pattern recognition for industrial data, geometric modeling, and virtual reality for engineering applications.

Duhwan Mun is a Professor in the School of Mechanical Engineering at Korea University, Seoul, Korea. He obtained his Ph.D. in Mechanical Engineering in 2006 at Korea Advanced Institute of Science and Technology. His current research interests are computer-aided design, industrial data standards for product data exchange, product lifecycle management, knowledge-based engineering, and VR for engineering applications.

About this article

Cite this article

Moon, Y., Han, ST., Lee, J. et al. Extraction of line objects from piping and instrumentation diagrams using an improved continuous line detection algorithm. J Mech Sci Technol 37, 1959–1972 (2023). https://doi.org/10.1007/s12206-023-0333-9

