Deep transfer learning methods for colon cancer classification in confocal laser microscopy images
The gold standard for detecting colorectal cancer metastases in the peritoneum is histological evaluation of a removed tissue sample. For feedback during interventions, real-time in vivo imaging with confocal laser microscopy has been proposed, allowing experts to differentiate benign from malignant tissue manually. Automatic image classification could further improve the surgical workflow by providing immediate feedback.
We analyze the feasibility of classifying tissue from confocal laser microscopy in the colon and peritoneum. For this purpose, we adopt both classical and state-of-the-art convolutional neural networks to learn directly from the images. As the available dataset is small, we investigate several transfer learning strategies, including partial freezing variants and full fine-tuning. We address both the distinction between different tissue types and the distinction between benign and malignant tissue.
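The strategies compared here differ only in which pretrained layers remain trainable. As a minimal sketch of that idea (the layer names, cutoff, and function below are illustrative, not the exact configuration used in the study):

```python
def trainable_layers(layers, strategy, freeze_until=None):
    """Select which layers receive gradient updates under a transfer strategy.

    layers: layer names ordered from early (generic features) to late
            (task-specific features).
    strategy: 'scratch'        - train everything from random initialization,
              'fine_tune'      - train everything, starting from pretrained weights,
              'partial_freeze' - keep pretrained weights fixed up to (but not
                                 including) `freeze_until`, train the rest.
    """
    if strategy in ("scratch", "fine_tune"):
        return list(layers)
    if strategy == "partial_freeze":
        cut = layers.index(freeze_until)
        return list(layers[cut:])
    raise ValueError(f"unknown strategy: {strategy}")


# Hypothetical layer ordering for a small CNN.
layers = ["conv1", "conv2", "conv3", "conv4", "fc"]
print(trainable_layers(layers, "partial_freeze", freeze_until="conv3"))
# -> ['conv3', 'conv4', 'fc']
```

In a framework such as PyTorch, the frozen layers would simply have `requires_grad` set to `False` on their parameters; the selection logic is the same.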
We present a thorough analysis of transfer learning strategies for colorectal cancer detection with confocal laser microscopy. In the peritoneum, metastases are classified with an AUC of 97.1%; in the colon, the primary tumor is classified with an AUC of 73.1%. In general, transfer learning substantially improves performance over training from scratch. We find that the optimal transfer learning strategy differs between models and classification tasks.
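The AUC values above are areas under the ROC curve, i.e., the probability that a randomly chosen malignant sample receives a higher score than a randomly chosen benign one. A minimal sketch of this computation via the Mann-Whitney U statistic (labels and scores below are made up for illustration):

```python
def auc(labels, scores):
    """Area under the ROC curve as the Mann-Whitney U statistic:
    fraction of positive/negative pairs where the positive is scored
    higher (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Toy example: two malignant (label 1) and two benign (label 0) samples.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))  # -> 1.0 (perfect ranking)
```

An AUC of 0.5 corresponds to random guessing, which is why the 73.1% result for the colon indicates a much harder task than the 97.1% result for the peritoneum.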
We demonstrate that convolutional neural networks and transfer learning can be used to identify cancer tissue with confocal laser microscopy. We show that there is no universally optimal transfer learning strategy; instead, model- and task-specific engineering is required. Given the high performance in the peritoneum, even with a small dataset, application for intraoperative decision support could be feasible.
Keywords: Colon cancer · Confocal laser microscopy · Transfer learning · Convolutional neural network
Compliance with ethical standards
Conflict of Interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving animals were in accordance with the ethical standards of the institution or practice at which the studies were conducted.
Informed consent was obtained from all individual participants included in the study.