
RETRACTED ARTICLE: Burn Image Recognition of Medical Images Based on Deep Learning: From CNNs to Advanced Networks

Published in: Neural Processing Letters

This article was retracted on 04 April 2024


Abstract

Image recognition is one of the central research topics in computer vision and has been widely applied to face recognition, aircraft recognition, and autonomous driving. As a key branch of computer vision, image target recognition uses a computer to extract feature information about a target from an acquired image, transforms the image content into a feature representation the computer can process, and classifies the target objects in the image with a suitable classification algorithm. Compared with traditional image recognition methods, deep learning can learn more complex knowledge: a well-designed deep network extracts the most useful information from the training data, generalizes well, and predicts unknown data more reliably. For image classification and recognition, convolutional layers are used to extract image features, and deeper networks make large-scale image classification feasible; combined with a specially designed network structure, the target in the image can also be localized. In this paper, a medical burn image recognition system is constructed using convolutional neural networks and deep learning. The proposed model is more robust than existing algorithms.
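The article does not reproduce its implementation here, but the approach the abstract describes, convolutional layers that extract image features followed by a classifier head, can be sketched as below. This is a minimal illustration only, assuming PyTorch, an RGB input of 224 × 224 pixels, and three burn-severity classes; these specifics (and the class name BurnCNN) are assumptions for the sketch, not details taken from the article.

```python
# Minimal sketch (not the authors' code): a small CNN of the kind the abstract
# describes -- convolutional layers extract features, a fully connected head
# classifies the burn image. Class count and input size are assumptions.
import torch
import torch.nn as nn


class BurnCNN(nn.Module):
    def __init__(self, num_classes: int = 3):  # assumed number of burn-severity classes
        super().__init__()
        # Feature extractor: three conv/ReLU/pool stages halve the spatial size each time.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classifier head: flatten the 64 x 28 x 28 feature map and map to class logits.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = BurnCNN(num_classes=3)
    dummy = torch.randn(1, 3, 224, 224)  # one RGB image, 224 x 224 (assumed input size)
    logits = model(dummy)
    print(logits.shape)  # torch.Size([1, 3])
```

In practice such a network would be trained on labelled burn photographs with a cross-entropy loss; the sketch above only shows the layer structure and the forward pass, not the training procedure or data used in the paper.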






Acknowledgements

The research presented in this paper was supported by the Funds of Science & Technology Research of Guangdong Province (Grant: 2017A040403070); High-level Hospital Construction Research Project of Maoming People's Hospital; the industry-university-research project of Maoming City (2019).

Author information


Corresponding author

Correspondence to Jinbo Huang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1007/s11063-024-11604-1

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article


Cite this article

Wu, X., Chen, H., Wu, X. et al. RETRACTED ARTICLE: Burn Image Recognition of Medical Images Based on Deep Learning: From CNNs to Advanced Networks. Neural Process Lett 53, 2439–2456 (2021). https://doi.org/10.1007/s11063-021-10459-0


