
DeepSplit: Segmentation of Microscopy Images Using Multi-task Convolutional Networks

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1248)

Abstract

Accurate segmentation of cellular structures is critical for automating the analysis of microscopy data. Advances in deep learning have driven substantial improvements in semantic image segmentation. In particular, U-Net, a model developed specifically for biomedical image data, performs multi-instance segmentation through pixel-wise classification. However, U-Net-based approaches tend to merge touching cells in dense cell cultures, resulting in under-segmentation. To address this issue, we propose DeepSplit, a multi-task convolutional neural network architecture in which one encoding path splits into two decoding branches. DeepSplit first learns segmentation masks and then explicitly learns the more challenging cell-cell contact regions. We test our approach on a challenging dataset of cells that vary widely in shape and intensity. DeepSplit achieves a 90% cell detection coefficient and a 90% Dice Similarity Coefficient (DSC), a significant improvement over the state-of-the-art U-Net, which scored 70% and 84%, respectively.
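
The sketch below illustrates the shared-encoder, two-decoder idea described in the abstract as a minimal PyTorch model: one encoding path whose features feed a mask branch and a separate cell-cell contact branch, trained with a combined multi-task loss. This is only an illustration under stated assumptions; the layer sizes, the class name `SplitNet`, and the loss weight `alpha` are hypothetical and not the authors' exact DeepSplit architecture.

```python
# Minimal sketch of a shared-encoder, two-decoder ("split") network in PyTorch.
# Illustrative only: layer widths, `SplitNet`, and `alpha` are hypothetical,
# not the published DeepSplit configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in typical U-Net-style blocks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """Single upsampling path producing a one-channel probability map."""
    def __init__(self, in_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(in_ch, in_ch // 2),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(in_ch // 2, in_ch // 4),
            nn.Conv2d(in_ch // 4, 1, kernel_size=1),
        )

    def forward(self, x):
        return self.up(x)


class SplitNet(nn.Module):
    """One encoding path that splits into two decoding branches:
    branch 1 predicts cell masks, branch 2 predicts cell-cell contact regions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )
        self.mask_decoder = Decoder(128)
        self.contact_decoder = Decoder(128)

    def forward(self, x):
        features = self.encoder(x)
        return self.mask_decoder(features), self.contact_decoder(features)


if __name__ == "__main__":
    net = SplitNet()
    image = torch.randn(1, 1, 128, 128)           # one grayscale microscopy tile
    mask_logits, contact_logits = net(image)

    # Multi-task objective: mask loss plus a weighted contact-region loss.
    bce = nn.BCEWithLogitsLoss()
    mask_gt = torch.randint(0, 2, (1, 1, 128, 128)).float()
    contact_gt = torch.randint(0, 2, (1, 1, 128, 128)).float()
    alpha = 0.5                                    # hypothetical weighting
    loss = bce(mask_logits, mask_gt) + alpha * bce(contact_logits, contact_gt)
    print(mask_logits.shape, contact_logits.shape, float(loss))
```

Subtracting the predicted contact regions from the predicted masks is one straightforward way such a model can separate touching cells before instance labelling, which is the under-segmentation problem the abstract targets.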


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Engineering Science, University of Oxford, Oxford, UK
  2. Centre for Biosensors, Bioelectronics and Biodevices, University of Bath, Bath, UK
