
A segmentation-based sequence residual attention model for KRAS gene mutation status prediction in colorectal cancer

Published in: Applied Intelligence

Abstract

In the computer-aided diagnosis of colorectal cancer, accurate determination of Kirsten rat sarcoma viral oncogene homolog (KRAS) gene mutation status is crucial for providing better treatment to patients. In recent years, deep learning methods have excelled in computer vision. In KRAS gene mutation status prediction, however, deep learning methods are usually designed for the classification task alone, ignoring the potential of the segmentation task to facilitate classification. In this paper, we propose a Segmentation-based Sequence Residual Attention Model (SSRAM) that supports the classification task by transferring the helpful information captured, and the lesion masks generated, in the segmentation task to the classification task, which in turn predicts the patient's KRAS gene mutation status. The model consists of a Pixel Gated Segmentation Network (PG-SN) and a Channel Guided Classification Network (CG-CN). Specifically, PG-SN captures segmentation features of lesions at different levels and generates lesion masks that provide CG-CN with prior guidance. CG-CN shares encoders and decoders with PG-SN; together with the precise lesion localisation provided by PG-SN, this enables CG-CN to acquire the high-level semantic features needed to enrich the classification features for accurate KRAS gene mutation status prediction. Meanwhile, to better optimise SSRAM, we design a new boundary loss and use it jointly with the Combo loss in PG-SN to address the difficulty of classifying lesion boundary pixels correctly. We evaluate the proposed SSRAM on T2-weighted MRI datasets, achieving an accuracy of 87.5% and an AUC of 94.74% in KRAS gene mutation status prediction, surpassing current non-invasive methods for predicting KRAS gene mutation status in colorectal cancer.
The results suggest that SSRAM, in which the segmentation task facilitates the classification task, can effectively improve classification performance, thereby better helping physicians diagnose the KRAS gene mutation status of patients. The code is publicly available.
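The core idea of the abstract — a segmentation branch whose lesion mask gives the classification branch spatial prior guidance over shared features — can be illustrated with a minimal NumPy sketch. The shapes, the multiplicative gating rule, and the residual term below are illustrative assumptions for exposition, not the authors' actual PG-SN/CG-CN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared-encoder feature maps: (channels, H, W).
features = rng.standard_normal((8, 16, 16))

# Lesion mask as the segmentation branch might produce it, values in [0, 1].
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0

# Prior guidance: the mask spatially gates every feature channel so the
# classification branch focuses on lesion pixels; the residual term keeps
# background context from being discarded entirely (residual-attention style).
gated = features * mask[None, :, :] + features

# Global average pooling over the lesion-weighted features yields the
# classification feature vector, one entry per channel.
class_vector = gated.mean(axis=(1, 2))
print(class_vector.shape)  # (8,)
```

Inside the mask the gated features are doubled, outside they pass through unchanged, so the pooled vector is biased toward lesion evidence without losing the surrounding tissue context.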
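The abstract also describes pairing a new boundary loss with the Combo loss so that hard-to-classify boundary pixels are handled better. The paper's exact formulation is not given in the abstract; the sketch below only illustrates the general idea — a joint cross-entropy/Dice objective whose cross-entropy term is re-weighted toward boundary pixels. The `boundary_weight` heuristic (double weight where the 4-neighbourhood mixes classes) and the `alpha` balance are assumptions, not the authors' loss:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Soft Dice coefficient over the whole probability map.
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_weight(target):
    # Up-weight pixels whose 4-neighbourhood mixes foreground and background,
    # i.e. the pixels on either side of the lesion boundary.
    pad = np.pad(target, 1, mode="edge")
    diff = (np.abs(target - pad[:-2, 1:-1]) + np.abs(target - pad[2:, 1:-1])
            + np.abs(target - pad[1:-1, :-2]) + np.abs(target - pad[1:-1, 2:]))
    return 1.0 + (diff > 0)  # boundary pixels get double weight

def seg_loss(pred, target, alpha=0.5, eps=1e-7):
    # Combo-style objective: cross-entropy plus Dice, with the cross-entropy
    # term re-weighted toward boundary pixels.
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weighted_ce = (boundary_weight(target) * ce).mean()
    return alpha * weighted_ce + (1 - alpha) * (1 - dice(pred, target))

# Toy check: a perfect prediction scores near zero, a blurred one scores higher.
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0
perfect = seg_loss(mask, mask)
blurred = seg_loss(np.clip(mask + 0.3, 0, 1), mask)
```

Because boundary pixels contribute twice as much cross-entropy as interior or far-background pixels, errors along the lesion contour dominate the gradient, which is the behaviour a boundary-aware loss is meant to encourage.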



Author information


Corresponding author

Correspondence to Juanjuan Zhao.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kai Song and Yulan Ma contributed equally to this work.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhao, L., Song, K., Ma, Y. et al. A segmentation-based sequence residual attention model for KRAS gene mutation status prediction in colorectal cancer. Appl Intell 53, 10232–10254 (2023). https://doi.org/10.1007/s10489-022-04011-3

