Classification and Diagnosis of Thyroid Carcinoma Using Reinforcement Residual Network with Visual Attention Mechanisms in Ultrasound Images

Abstract

Differentiating thyroid cancer nodules from the large number of benign nodules is a persistent challenge for clinicians. This paper proposes a novel Sal-deep network model for the classification and diagnosis of thyroid cancer that simulates the visual attention mechanism. The Sal-deep network introduces a saliency map as additional information into a deep residual network, selectively enhancing the features extracted from different regions according to the mask map. The Sal-deep network works effectively with benchmark networks of different structures and on different data sets, making it a universal network model. Although it increases the complexity of the network, it improves the network's efficiency. Extensive qualitative and quantitative experiments show that the improved network is superior to other existing deep models in classification accuracy and recall, which makes it suitable for clinical application.
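
The abstract describes the mechanism only at a high level. The following is a minimal sketch, not the authors' code, of how a saliency map could selectively enhance features inside a residual block, assuming a PyTorch-style implementation; the class name SaliencyGatedResBlock and the (1 + saliency) gating are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumption, not the paper's implementation): a residual block
    # whose intermediate features are amplified in salient regions before the skip
    # connection, mimicking the saliency-guided enhancement described in the abstract.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SaliencyGatedResBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
            # saliency: single-channel map in [0, 1], shape (N, 1, H, W),
            # resized here to match the feature-map resolution
            sal = F.interpolate(saliency, size=x.shape[-2:], mode="bilinear", align_corners=False)
            out = F.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            # (1 + sal) keeps responses in non-salient regions intact while
            # amplifying responses in salient (nodule) regions
            out = out * (1.0 + sal)
            return F.relu(out + x)

    # usage: feats = block(feats, saliency_map)   # saliency_map shape: (N, 1, H, W)

The multiplicative gate is only one plausible reading of "selectively enhances the features ... according to the mask map"; other fusion choices (concatenation, attention weighting) would fit the same description.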

Acknowledgments

This work was financially supported by the Zhejiang Provincial Fund Joint Fund of Mathematical and Physical Medical Association (LSY19H180010).

Author information

Corresponding author

Correspondence to Yanming Zhang.

Ethics declarations

Conflict of interest

We declare that we have no conflict of interest.

Human and animal rights

The paper does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Image & Signal Processing

About this article

Cite this article

Zhang, Y. Classification and Diagnosis of Thyroid Carcinoma Using Reinforcement Residual Network with Visual Attention Mechanisms in Ultrasound Images. J Med Syst 43, 323 (2019). https://doi.org/10.1007/s10916-019-1448-5
