
FBUNet: Full Convolutional Network Based on Fusion Block Architecture for Biomedical Image Segmentation

  • Original Article
  • Journal of Medical and Biological Engineering

Abstract

Purpose

Precise segmentation of organs or tumors is essential for diagnosis and prognosis.

Methods

We propose two novel end-to-end segmentation models, FBUNet-1 and FBUNet-2. The FBUNet-1 model achieves higher performance than the classic U-Net by reducing the loss of spatial information in convolutional operations. The FBUNet-2 model further increases accuracy by modifying the loss function on top of FBUNet-1. In this research, we compare the proposed models with the classic U-Net and deep residual U-Net models on four evaluation indexes: Dice coefficient, Jaccard similarity, Sensitivity, and Precision.
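
As context for the comparison, the four indexes are standard overlap measures between a predicted mask and the ground truth. A minimal NumPy sketch of how they could be computed follows; the function name, thresholding, and smoothing term are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def segmentation_metrics(pred, target, threshold=0.5, eps=1e-7):
        """Compute Dice, Jaccard, Sensitivity and Precision for binary masks."""
        p = (np.asarray(pred) > threshold).astype(np.float64)    # predicted mask
        t = (np.asarray(target) > threshold).astype(np.float64)  # ground-truth mask

        tp = np.sum(p * t)        # true positives
        fp = np.sum(p * (1 - t))  # false positives
        fn = np.sum((1 - p) * t)  # false negatives

        dice        = 2 * tp / (2 * tp + fp + fn + eps)
        jaccard     = tp / (tp + fp + fn + eps)
        sensitivity = tp / (tp + fn + eps)   # also called recall
        precision   = tp / (tp + fp + eps)
        return dice, jaccard, sensitivity, precision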

Results

The experimental results show that, with roughly one-third fewer training parameters, the FBUNet-1 and FBUNet-2 models still improve overall performance on cell edge segmentation, blood vessel segmentation, lung segmentation, and cell nuclei segmentation. For example, in cell segmentation the average Dice coefficient is 93.96%, Jaccard similarity 88.62%, Sensitivity 94.19%, and Precision 93.73%. In addition, compared with the U-Net model under fivefold cross validation, the proposed FBUNet-2 model improves average Jaccard similarity by 0.5% and Dice coefficient by 0.3% for cell edge segmentation, and Jaccard similarity by 0.9% and Dice coefficient by 0.6% for cell nuclei segmentation.
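
For reference, a minimal sketch of how per-fold results could be averaged under fivefold cross validation; scikit-learn's KFold is used purely for illustration, and train_and_evaluate is a hypothetical placeholder for a training/evaluation routine, not the authors' code:

    import numpy as np
    from sklearn.model_selection import KFold

    def cross_validate(images, train_and_evaluate, n_splits=5, seed=42):
        """Average (Dice, Jaccard) over K folds.

        train_and_evaluate(train_idx, val_idx) is expected to fit a model on
        the training indices and return (dice, jaccard) on the validation fold.
        """
        kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        fold_scores = []
        for train_idx, val_idx in kfold.split(images):
            fold_scores.append(train_and_evaluate(train_idx, val_idx))
        return np.mean(fold_scores, axis=0)  # (mean Dice, mean Jaccard)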

Conclusion

Compared with the deep residual U-Net and classic U-Net models, the FBUNet-1 and FBUNet-2 models show potential for practical clinical application.


Notes

  1. http://brainiac2.mit.edu/isbi_challenge/.

  2. https://drive.grand-challenge.org/.

  3. https://www.kaggle.com/kmader/finding-lungs-in-ct-data/data/.

  4. https://www.kaggle.com/c/data-science-bowl-2018.


Funding

This study was funded by the National Natural Science Foundation of China (81973749). It was also supported by the Shanghai Municipal Commission of Health and Family Planning Scientific Research General Project (201740093) and the TCM Guidance Project of Shanghai Science and Technology Commission (18401903600).

Author information

Corresponding author

Correspondence to JingDong Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

About this article

Cite this article

Wang, H., Yang, J. FBUNet: Full Convolutional Network Based on Fusion Block Architecture for Biomedical Image Segmentation. J. Med. Biol. Eng. 41, 185–202 (2021). https://doi.org/10.1007/s40846-020-00583-y

