Representation Learning for Cross-Modality Classification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10081)

Abstract

Differences in scanning parameters or modalities can complicate image analysis based on supervised classification. This paper presents two representation learning approaches, based on autoencoders, that address this problem by learning representations that are similar across domains. Both approaches combine the standard data-representation objective with a similarity objective that minimises the difference between representations of corresponding patches from each domain. We evaluated the methods in transfer learning experiments on multi-modal brain MRI data and on synthetic data. After transforming training and test data from different modalities to the common representations learned by our methods, we trained classifiers for each pair of modalities. We found that adding the similarity term to the standard objective can produce representations that are more similar and can give higher accuracy in these cross-modality classification experiments.
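The combined objective described above can be made concrete with a short sketch. The following is a minimal illustration, assuming PyTorch and simple fully connected autoencoders; the patch size, hidden-layer width, weighting term `lam`, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Single-hidden-layer autoencoder for flattened image patches."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# One autoencoder per modality; 15x15 = 225-pixel patches (illustrative size).
n_in, n_hidden, lam = 225, 100, 1.0
ae_a, ae_b = Autoencoder(n_in, n_hidden), Autoencoder(n_in, n_hidden)
opt = torch.optim.Adam(list(ae_a.parameters()) + list(ae_b.parameters()), lr=1e-3)
mse = nn.MSELoss()

# x_a and x_b hold *corresponding* patches from modalities A and B;
# random stand-in data here.
x_a, x_b = torch.rand(64, n_in), torch.rand(64, n_in)

for step in range(100):
    z_a, rec_a = ae_a(x_a)
    z_b, rec_b = ae_b(x_b)
    # Data-representation objective: reconstruct each modality's own patches.
    recon_loss = mse(rec_a, x_a) + mse(rec_b, x_b)
    # Similarity objective: pull representations of corresponding patches together.
    sim_loss = mse(z_a, z_b)
    loss = recon_loss + lam * sim_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, patches from either modality can be encoded with the corresponding encoder and fed to a single classifier, which is the cross-modality setup the paper evaluates.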

Keywords

Representation learning · Transfer learning · Autoencoders · Deep learning · Multi-modal image analysis


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Biomedical Imaging Group Rotterdam, Erasmus MC University Medical Center, Rotterdam, The Netherlands
  2. Image Group, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
