ResMIBCU-Net: an encoder–decoder network with residual blocks, modified inverted residual block, and bi-directional ConvLSTM for impacted tooth segmentation in panoramic X-ray images

  • Original Article
Published in Oral Radiology

Abstract

Objective

An impacted tooth is a common problem that can occur at any age, causing tooth decay, root resorption, and pain in later stages. In recent years, major advances have been made in medical image segmentation using deep convolutional neural networks. In this study, we report the development of an artificial intelligence system for the automatic identification of impacted teeth in panoramic dental X-ray images.

Methods

Among existing networks for medical image segmentation, U-Net architectures are the most widely used. In this article, for dental X-ray image segmentation, we upgrade the convolutional blocks of the U-Net with residual blocks and a modified inverted residual block, taking advantage of the U-Net's dense connections. In addition, we replace the simple skip connections with bi-directional convolutional long short-term memory (ConvLSTM) layers. The performance of the proposed artificial intelligence model was evaluated with accuracy, F1-score, intersection over union, and recall.
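As context for the modified blocks mentioned above, the following is a minimal NumPy sketch of a standard inverted residual block (1×1 expansion, 3×3 depthwise convolution, linear 1×1 projection, identity shortcut) in the style of MobileNetV2. The weights, tensor sizes, and expansion factor here are illustrative assumptions, not the paper's exact modified block:

```python
import numpy as np

def pointwise_conv(x, w):
    # x: (H, W, C_in), w: (C_in, C_out); a 1x1 convolution is a per-pixel matmul
    return x @ w

def depthwise_conv3x3(x, k):
    # x: (H, W, C), k: (3, 3, C); 'same' padding, each channel filtered independently
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k, axis=(0, 1))
    return out

def relu6(x):
    # activation used in MobileNetV2-style blocks
    return np.clip(x, 0.0, 6.0)

def inverted_residual(x, w_expand, k_dw, w_project):
    # expand -> depthwise -> linear projection, with identity shortcut
    h = relu6(pointwise_conv(x, w_expand))      # widen channels by factor t
    h = relu6(depthwise_conv3x3(h, k_dw))       # cheap spatial filtering
    h = pointwise_conv(h, w_project)            # no activation: linear bottleneck
    return x + h                                # residual connection (same shape)

rng = np.random.default_rng(0)
C, t = 8, 4  # channel count and expansion factor (illustrative values)
x = rng.standard_normal((16, 16, C))
y = inverted_residual(x,
                      rng.standard_normal((C, C * t)) * 0.1,
                      rng.standard_normal((3, 3, C * t)) * 0.1,
                      rng.standard_normal((C * t, C)) * 0.1)
print(y.shape)  # -> (16, 16, 8)
```

The "inverted" naming refers to expanding the channels inside the block rather than compressing them, which keeps the depthwise convolution cheap while preserving representational capacity.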

Results

The proposed method achieved 99.82% accuracy, 91.59% F1-score, 84.48% intersection over union, and 90.71% recall.
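The four metrics reported above can all be computed from a binary segmentation confusion matrix. A minimal NumPy sketch, using small illustrative masks rather than the paper's data:

```python
import numpy as np

def segmentation_metrics(pred, target):
    # pred, target: binary masks (0 = background, 1 = impacted tooth)
    tp = np.sum((pred == 1) & (target == 1))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    tn = np.sum((pred == 0) & (target == 0))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)   # all pixels classified correctly
    recall    = tp / (tp + fn)                    # fraction of tooth pixels found
    precision = tp / (tp + fp)
    f1  = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                     # intersection over union
    return {"accuracy": accuracy, "f1": f1, "iou": iou, "recall": recall}

pred   = np.array([[1, 1, 0, 0], [0, 1, 0, 0]])
target = np.array([[1, 0, 0, 0], [0, 1, 1, 0]])
print(segmentation_metrics(pred, target))
```

Note that accuracy is dominated by the large background region in panoramic radiographs, which is why it (99.82%) is much higher than the overlap-sensitive IoU (84.48%).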

Conclusion

Our findings suggest that the proposed artificial intelligence system could provide diagnostic support in future clinical practice.


[Figures 1–11 appear in the full article.]


Data availability

The dataset used in this study is available from the corresponding author upon request.


Acknowledgements

We thank the Small and Medium Enterprises Development Organization of Turkey (KOSGEB) for supporting this study, titled "Artificial intelligence-based expert system design in oral radiological imaging techniques".

Funding

This study was funded by the Small and Medium Enterprises Development Organization of Turkey (KOSGEB) (R&D and Innovation Support Programme project number 62146).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Andaç Imak.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval

Prior to the study, approval was obtained from the Firat University Noninterventional Clinical Research Ethics Committee (approval date: December 31, 2020; No. 2020/17-15).

Informed consent

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Imak, A., Çelebi, A., Polat, O. et al. ResMIBCU-Net: an encoder–decoder network with residual blocks, modified inverted residual block, and bi-directional ConvLSTM for impacted tooth segmentation in panoramic X-ray images. Oral Radiol 39, 614–628 (2023). https://doi.org/10.1007/s11282-023-00677-8

