Synergistic Combination of Learned and Hand-Crafted Features for Prostate Lesion Classification in Multiparametric Magnetic Resonance Imaging

  • Davood Karimi
  • Dan Ruan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)


In this paper, we propose and evaluate a new method for classifying prostate lesions as malignant or benign in multiparametric magnetic resonance imaging (MRI). We show that synergistically combining automatically learned and hand-crafted features can significantly improve classification performance. Our method utilizes features extracted from convolutional neural networks (CNNs), texture features learned via a discriminative sparsity-regularized approach, and hand-crafted statistical features. To assess the efficacy of different feature sets, we use AdaBoost with decision trees to classify prostate cancer lesions using different sets of features. CNN-derived, texture, and statistical features achieved areas under the receiver operating characteristic curve (AUC) of 0.75, 0.68, and 0.70, respectively. Augmenting CNN features with texture and statistical features increased the AUC to 0.84 and 0.82, respectively. Combining all three feature types led to an AUC of 0.87. Our results indicate that in medical applications where training data is scarce, the classification performance achieved by CNNs or sparsity-regularized classification methods alone can be sub-optimal. Alternatively, one can treat these methods as implicit feature extraction mechanisms and combine their learned features with hand-crafted features using meta-classifiers to obtain superior classification performance.
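As a rough illustration of the feature-fusion strategy described in the abstract, the sketch below concatenates three feature sets and trains an AdaBoost classifier with decision-tree base learners using scikit-learn. The feature dimensions, the synthetic random data, and the hyperparameters are placeholders for illustration only, not the paper's actual configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # number of lesions (placeholder)

# Stand-ins for the three feature sets described in the paper:
cnn_feats = rng.normal(size=(n, 64))      # features from a CNN
texture_feats = rng.normal(size=(n, 16))  # sparsity-regularized texture features
stat_feats = rng.normal(size=(n, 8))      # hand-crafted statistical features
y = rng.integers(0, 2, size=n)            # labels: 0 = benign, 1 = malignant

# Fuse the feature sets by simple concatenation of the feature vectors.
X = np.hstack([cnn_feats, texture_feats, stat_feats])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# AdaBoost with decision trees as the meta-classifier; scikit-learn's
# default base learner is a depth-1 decision tree (a decision stump).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

# Evaluate with AUC, the metric reported in the paper.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

On the synthetic random data above the AUC is near chance; with real multiparametric MRI features the paper reports that the fused representation outperforms any single feature set.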


Keywords: Multi-parametric Magnetic Resonance Imaging, Handcrafted Features, Prostate Cancer Lesions, Convolutional Neural Network (CNN), AdaBoost



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Radiation Oncology, University of California, Los Angeles, USA
