
Automatic Segmentation Based on Deep Learning Techniques for Diabetic Foot Monitoring Through Multimodal Images

  • Abián Hernández
  • Natalia Arteaga-Marrero
  • Enrique Villa
  • Himar Fabelo
  • Gustavo M. Callicó
  • Juan Ruiz-Alzola
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11752)

Abstract

Temperature data acquired by infrared sensors provide relevant information for assessing different medical pathologies at early stages, when the symptoms of the disease are not yet visible to the naked eye. A clinical system that exploits multimodal images (visible, depth and thermal infrared) is currently being developed for diabetic foot monitoring. The workflow required to analyze these images begins with image acquisition, followed by automatic segmentation of the feet. A novel approach for automatic feet segmentation based on Deep Learning is presented: an encoder-decoder network (U-Net architecture) produces an initial prediction, and planes are then segmented in the point cloud built from the depth information of the pixels labeled by the network. The proposed automatic segmentation is robust for this case study, providing results in a short time and achieving better performance than traditional segmentation methods as well as a basic U-Net segmentation system.
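The paper's implementation is not reproduced on this page, but the two-stage idea described in the abstract can be illustrated with a minimal sketch: a binary foreground mask predicted by the U-Net is refined by back-projecting the masked depth pixels to a point cloud, fitting the dominant plane (e.g., the support surface) with RANSAC, and discarding its inliers. Everything below is an assumption for illustration: the helper names (backproject, ransac_plane, refine_mask), the pinhole intrinsics fx, fy, cx, cy, and the NumPy-only RANSAC; the authors' actual implementation may differ in detail.

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels to 3D using a pinhole camera model (assumed)."""
    v, u = np.nonzero(mask)                 # pixel coordinates inside the U-Net mask
    z = depth[v, u]
    valid = z > 0                           # drop pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1), v, u

def ransac_plane(points, n_iters=500, threshold=0.01, seed=0):
    """RANSAC fit of the dominant plane; returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:   # degenerate (collinear) sample
            continue
        normal /= np.linalg.norm(normal)
        dist = np.abs((points - p0) @ normal)   # point-to-plane distance
        inliers = dist < threshold              # threshold in metres (assumed depth unit)
        if inliers.sum() > best.sum():
            best = inliers
    return best

def refine_mask(depth, unet_mask, fx, fy, cx, cy):
    """Keep only off-plane pixels of the U-Net prediction (i.e., the feet)."""
    points, v, u = backproject(depth, unet_mask, fx, fy, cx, cy)
    plane = ransac_plane(points)
    refined = np.zeros(unet_mask.shape, dtype=bool)
    refined[v[~plane], u[~plane]] = True    # discard points lying on the fitted plane
    return refined
```

In practice the distance threshold and iteration count would be tuned to the depth sensor's noise; the design point conveyed by the abstract is that the plane segmentation exploits geometry that a color-only network cannot see, which is what allows the combined method to outperform the basic U-Net baseline.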

Keywords

RGB-D images · Multimodal images · Deep Learning · Automatic segmentation


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Research Institute in Biomedical and Health (iUIBS), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
  2. Instituto de Astrofísica de Canarias (IAC), La Laguna, Spain
  3. Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
