Abstract
Purpose
Precise segmentation of organs or tumors is essential for diagnosis and prognosis.
Methods
We propose two novel end-to-end segmentation models, FBUNet-1 and FBUNet-2. FBUNet-1 outperforms the classic U-Net by reducing the loss of spatial information incurred by convolutional operations. FBUNet-2 further increases accuracy by modifying the loss function of FBUNet-1. We compare the proposed models with the classic U-Net and the deep residual U-Net on four evaluation indexes: Dice coefficient, Jaccard similarity, Sensitivity, and Precision.
Results
The experimental results show that, despite cutting almost one-third of the training parameters, the FBUNet-1 and FBUNet-2 models still improve overall performance on cell edge segmentation, blood vessel segmentation, lung segmentation, and cell nuclei segmentation. For example, in cell segmentation the average Dice coefficient is 93.96%, Jaccard similarity 88.62%, Sensitivity 94.19%, and Precision 93.73%. In addition, under fivefold cross validation the proposed FBUNet-2 model improves on the U-Net model by 0.5% in Jaccard similarity and 0.3% in Dice coefficient for cell edge segmentation, and by 0.9% in Jaccard similarity and 0.6% in Dice coefficient for cell nuclei segmentation.
Conclusion
Compared with the deep residual U-Net and classic U-Net models, the FBUNet-1 and FBUNet-2 models show potential for practical clinical application.
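The four evaluation indexes used above (Dice coefficient, Jaccard similarity, Sensitivity, and Precision) can all be derived from the true-positive, false-positive, and false-negative counts of a binary segmentation mask. The following is a minimal sketch of these metrics in NumPy; it is an illustration of the standard definitions, not the authors' evaluation code, and the function name `segmentation_metrics` is our own.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Compute Dice, Jaccard, Sensitivity, and Precision for binary masks.

    pred, target: arrays of the same shape with 0/1 (or boolean) entries.
    eps guards against division by zero on empty masks.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)

    tp = np.logical_and(pred, target).sum()    # predicted 1, truly 1
    fp = np.logical_and(pred, ~target).sum()   # predicted 1, truly 0
    fn = np.logical_and(~pred, target).sum()   # predicted 0, truly 1

    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)         # recall over the foreground
    precision = tp / (tp + fp + eps)
    return dice, jaccard, sensitivity, precision
```

Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why improvements in the two indexes tend to move together in the results reported above.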
Funding
This study was funded by the National Natural Science Foundation of China (81973749). It was also supported by the Shanghai Municipal Commission of Health and Family Planning Scientific Research General Project (201740093) and the TCM Guidance Project of Shanghai Science and Technology Commission (18401903600).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
About this article
Cite this article
Wang, H., Yang, J. FBUNet: Full Convolutional Network Based on Fusion Block Architecture for Biomedical Image Segmentation. J. Med. Biol. Eng. 41, 185–202 (2021). https://doi.org/10.1007/s40846-020-00583-y