
A Multiresolution Convolutional Neural Network with Partial Label Training for Annotating Reflectance Confocal Microscopy Images of Skin

  • Alican Bozkurt
  • Kivanc Kose
  • Christi Alessi-Fox
  • Melissa Gill
  • Jennifer Dy
  • Dana Brooks
  • Milind Rajadhyaksha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11071)

Abstract

We describe a new multiresolution “nested encoder-decoder” convolutional network architecture and use it to annotate morphological patterns in reflectance confocal microscopy (RCM) images of human skin to aid cancer diagnosis. Skin cancers are the most common cancers, and melanoma is the deadliest among them. RCM is an effective, non-invasive pre-screening tool for skin cancer diagnosis, with the required cellular resolution. However, RCM images are complex, low-contrast, and highly variable, so clinicians require months to years of expert-level training to make accurate assessments. In this paper we address classifying four clinically important structural/textural patterns in RCM images. Clinicians use the occurrence and morphology of these patterns to diagnose melanoma. The large size of RCM images, the large variance in pattern size, the large scale range over which patterns appear, the class imbalance in collected images, and the lack of fully-labelled images all make this a challenging problem, even for automated machine learning tools. We designed a novel nested U-Net architecture to cope with these challenges, and a selective loss function to handle partial labelling. Trained and tested on 56 melanoma-suspicious, partially labelled, 12k × 12k-pixel images, our network automatically annotated RCM images for these diagnostic patterns with high sensitivity and specificity, providing consistent labels for unlabelled sections of the test images. We believe that providing such annotation quickly will aid clinicians in achieving diagnostic accuracy and, perhaps more importantly, dramatically facilitate clinical training, thus enabling much more rapid adoption of RCM into widespread clinical use. In addition, our adaptation of the U-Net architecture provides an intrinsically multiresolution deep network that may be useful in other challenging biomedical image analysis applications.
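The abstract's "selective loss function to handle partial labeling" can be illustrated with a minimal sketch: compute the per-pixel loss only over labelled pixels and ignore the rest. The function below is a hypothetical NumPy illustration (the paper's actual loss and its implementation are not given here); the name `selective_cross_entropy` and the `ignore_index` sentinel are assumptions for the example.

```python
import numpy as np

def selective_cross_entropy(probs, labels, ignore_index=-1):
    """Cross-entropy averaged over labelled pixels only.

    probs:  (H, W, C) array of per-pixel softmax probabilities
    labels: (H, W) integer class labels; ignore_index marks unlabelled pixels
    """
    mask = labels != ignore_index            # boolean mask of labelled pixels
    if not mask.any():
        return 0.0                           # no labelled pixels -> no loss signal
    # A 2-D boolean mask selects pixels; labels[mask] picks the true-class
    # probability at each selected pixel.
    picked = probs[mask, labels[mask]]
    return float(-np.mean(np.log(picked + 1e-12)))
```

Because unlabelled pixels contribute no gradient, the network can be trained on partially labelled images while still producing predictions everywhere, consistent with how the paper describes labelling the unlabelled sections of test images.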

Keywords

Reflectance confocal microscopy · Melanoma · Segmentation


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Alican Bozkurt (1)
  • Kivanc Kose (2)
  • Christi Alessi-Fox (3)
  • Melissa Gill (4, 5)
  • Jennifer Dy (1)
  • Dana Brooks (1)
  • Milind Rajadhyaksha (2)
  1. Northeastern University, Boston, USA
  2. Memorial Sloan Kettering Cancer Center, New York, USA
  3. Caliber I.D. Inc., Rochester, USA
  4. SkinMedical Research and Diagnostics, P.L.L.C., Dobbs Ferry, USA
  5. Department of Pathology, SUNY Downstate Medical Center, Brooklyn, USA
