
A two-stage CNN-based hand-drawn electrical and electronic circuit component recognition system

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Although handwriting recognition has been a well-explored research area for decades, a few of its sub-areas have still received little attention from researchers. Examples include the recognition of hand-drawn graphical components such as circuit components and diagrams. Complete digitization of such handwritten documents is not possible without automatic conversion of the circuit diagrams they contain. Moreover, in most commercial circuit design workflows to date, the components are entered manually into simulation software such as Cadence or Spice to analyze a circuit and judge its performance. This work takes a step towards automating that process by recognizing hand-drawn circuit components, which is the most important step in the automation. The present endeavour is to design a two-stage convolutional neural network (CNN)-based model that recognizes hand-drawn circuit components. In the first stage, similar-looking circuit components (i.e., those with similar shape and structure) are clustered into groups using visual perception and input from the confusion matrix of a single-stage CNN-based classification; in the second stage, the circuit components belonging to each group are classified into their actual classes. The proposed model has been evaluated on a self-made database comprising 20 different classes of hand-drawn circuit components. The experimental outcome shows that the proposed two-stage classification model achieves an accuracy of 97.33%, much higher than the 86.00% accuracy of the single-stage method.
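The abstract describes the pipeline only at a high level. Below is a minimal sketch, in Python with TensorFlow/Keras, of how such a two-stage inference flow could be wired together: a first CNN assigns a sketch to a group of look-alike components, and a per-group CNN then resolves the actual class within that group. The image size, group-to-class mapping, and network layouts are illustrative assumptions, not the architectures used in the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)  # assumed input resolution, not stated on this page

# Hypothetical grouping of look-alike components, e.g. derived from the
# confusion matrix of a single-stage 20-class CNN plus visual inspection.
GROUP_TO_CLASSES = {
    0: ["resistor", "inductor"],
    1: ["capacitor", "battery"],
    2: ["diode", "led", "zener-diode"],
}
NUM_GROUPS = len(GROUP_TO_CLASSES)


def make_cnn(num_classes: int) -> tf.keras.Model:
    """A small CNN classifier; stands in for both the stage-1 and stage-2 nets."""
    return models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])


# Stage 1: map a sketch to a group of visually similar components.
stage1 = make_cnn(NUM_GROUPS)
# Stage 2: one classifier per group, resolving the actual component class.
stage2 = {g: make_cnn(len(cls)) for g, cls in GROUP_TO_CLASSES.items()}


def predict_component(image: np.ndarray) -> str:
    """Two-stage prediction: pick the group first, then the class within it."""
    x = image.reshape(1, *IMG_SIZE, 1).astype("float32") / 255.0
    group = int(np.argmax(stage1.predict(x, verbose=0)))
    class_idx = int(np.argmax(stage2[group].predict(x, verbose=0)))
    return GROUP_TO_CLASSES[group][class_idx]
```

In this arrangement each stage-2 classifier only has to separate a handful of easily confused symbols, which is the intuition behind the jump from 86.00% (single-stage) to 97.33% (two-stage) accuracy reported in the abstract.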





Acknowledgement

We would like to thank the CMATER research laboratory of the Computer Science and Engineering Department, Jadavpur University, India, for providing infrastructural support.

Author information


Corresponding author

Correspondence to Samir Malakar.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dey, M., Mia, S.M., Sarkar, N. et al. A two-stage CNN-based hand-drawn electrical and electronic circuit component recognition system. Neural Comput & Applic 33, 13367–13390 (2021). https://doi.org/10.1007/s00521-021-05964-1


