
Anatomical-Functional Fusion Network for Lesion Segmentation Using Dual-View CEUS

  • Conference paper

Advanced Data Mining and Applications (ADMA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14177)


Abstract

Dual-view contrast-enhanced ultrasound (CEUS) has been widely applied to lesion detection and characterization because it provides both anatomical and functional information about lesions. Accurate delineation of the lesion contour is important for assessing lesion morphology and perfusion dynamics. Although the last decade has witnessed unprecedented progress of deep learning methods in 2D ultrasound image segmentation, few attempts have been made to discriminate tissue perfusion discrepancies using dynamic CEUS imaging. Combined with the side-by-side gray-scale US view, we propose a novel anatomical-functional fusion network (AFF-Net) to fuse complementary imaging characteristics from dual-view dynamic CEUS imaging. Towards a comprehensive characterization of lesions, our method tackles two main challenges: 1) how to effectively represent and aggregate enhancement features of the dynamic CEUS view; and 2) how to efficiently fuse them with the morphology features of the US view. Correspondingly, we design a channel-wise perfusion (PE) gate and an anatomical-functional fusion (AFF) module to exploit dynamic blood-flow characteristics and to perform layer-level fusion of the two modalities, respectively. The effectiveness of AFF-Net for lesion segmentation is validated on our collected thyroid nodule dataset, with superior performance compared with existing methods.

P. Wan and C. Liu contributed equally to this work.
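The abstract names two components, a channel-wise perfusion (PE) gate and an anatomical-functional fusion (AFF) module, without giving implementation details. As a rough illustration only, the NumPy sketch below shows one plausible reading of channel-wise gating followed by layer-level fusion: a squeeze-and-excitation-style gate re-weights the dynamic CEUS channels, the temporal axis is averaged out, and the result is concatenated with the gray-scale US features. All shapes, function names, and the gating form here are assumptions, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def perfusion_gate(ceus_feat, w):
    """Hypothetical channel-wise gate: pool each CEUS channel over
    time and space, then produce per-channel weights in (0, 1).
    ceus_feat: (C, T, H, W) dynamic CEUS features; w: (C, C) weights."""
    desc = ceus_feat.mean(axis=(1, 2, 3))          # (C,) channel descriptor
    gate = sigmoid(w @ desc)                       # (C,) gating coefficients
    return ceus_feat * gate[:, None, None, None]   # re-weighted features

def fuse(us_feat, ceus_feat, w):
    """Hypothetical layer-level fusion: gate the CEUS branch, collapse
    the temporal axis, and concatenate with US features along channels."""
    gated = perfusion_gate(ceus_feat, w)           # (C, T, H, W)
    ceus_2d = gated.mean(axis=1)                   # average over time -> (C, H, W)
    return np.concatenate([us_feat, ceus_2d], axis=0)  # (2C, H, W)

rng = np.random.default_rng(0)
C, T, H, W = 4, 8, 16, 16
us = rng.standard_normal((C, H, W))        # gray-scale US feature map
ceus = rng.standard_normal((C, T, H, W))   # dynamic CEUS feature volume
w = rng.standard_normal((C, C)) * 0.1
fused = fuse(us, ceus, w)
print(fused.shape)  # (8, 16, 16)
```

In this sketch the US branch passes through unchanged while the CEUS branch is modulated per channel, which is one common way to let a network emphasize perfusion-informative channels before fusion; the actual AFF-Net design may differ.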



Acknowledgement

This work was supported by the National Natural Science Foundation of China (Nos. 62136004, 62276130, 61732006, 61876082), and also by the Key Research and Development Plan of Jiangsu Province (No. BE2022842).

Author information


Corresponding author

Correspondence to Daoqiang Zhang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wan, P., Liu, C., Zhang, D. (2023). Anatomical-Functional Fusion Network for Lesion Segmentation Using Dual-View CEUS. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science (LNAI), vol. 14177. Springer, Cham. https://doi.org/10.1007/978-3-031-46664-9_17


  • DOI: https://doi.org/10.1007/978-3-031-46664-9_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46663-2

  • Online ISBN: 978-3-031-46664-9

  • eBook Packages: Computer Science, Computer Science (R0)
