Abstract

Dermoscopy imaging is a routine examination technique for skin lesion diagnosis, and accurate segmentation is the first step in automatic dermoscopy image assessment. The main challenges in skin lesion segmentation are the large variations in the viewpoint and scale of the lesion region. To address these challenges, we propose a novel skin lesion segmentation framework for dermoscopic images based on a very deep residual neural network. The deep residual network is combined with a generic multi-path Deep RefineNet to improve segmentation performance. Deep representations from all available layers are aggregated into global feature maps via skip connections, and chained residual pooling is leveraged to capture diverse, context-dependent appearance features. Finally, a conditional random field (CRF) is applied to smooth the segmentation maps. On the public skin lesion challenge dataset, the proposed method outperforms state-of-the-art approaches.
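As a rough illustration of the chained residual pooling idea mentioned above (a chain of stride-1 pooling stages whose outputs are summed back onto the input so that context from increasingly large regions is fused with the original features), the following is a minimal NumPy sketch. The window size, number of stages, and the fixed scaling factor are illustrative assumptions; in the actual block each stage's output passes through a learned convolution rather than a constant scale.

```python
import numpy as np

def max_pool_same(x, k=5):
    """Stride-1 max pooling with 'same' (edge) padding on a 2-D feature map."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def chained_residual_pooling(x, stages=2, k=5, scale=0.5):
    """Sketch of chained residual pooling: each stage pools the previous
    stage's output and adds a scaled copy back onto the running sum.
    (In the real block the scaling is a learned convolution.)"""
    out = x.copy()
    pooled = x
    for _ in range(stages):
        pooled = max_pool_same(pooled, k)   # context from a larger window
        out = out + scale * pooled          # residual accumulation
    return out

feat = np.random.rand(8, 8).astype(np.float32)
refined = chained_residual_pooling(feat)
```

Because each pooling stage reuses the previous stage's output, two 5x5 stages already aggregate context from an effective 9x9 region at the cost of one extra pooling pass, which is the efficiency argument behind chaining the stages.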

Keywords

Dermoscopy image · Skin lesion segmentation · Deep residual network · Conditional random field · Deep RefineNet

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
