
Improving the Automatic Classification of Brain MRI Acquisition Contrast with Machine Learning

Original Paper · Published in the Journal of Digital Imaging

Abstract

Automated quantification of data acquired as part of an MRI exam requires identification of the specific acquisition relevant to a particular analysis. This motivates the development of methods capable of reliably classifying MRI acquisitions by their nominal contrast type, e.g., T1 weighted, T1 post-contrast, T2 weighted, T2-weighted FLAIR, and proton-density weighted. Prior studies have applied imaging-based methods and DICOM metadata-based methods with success to cohorts of patients acquired as part of a clinical trial. This study compares the performance of these methods on heterogeneous clinical datasets acquired with many different scanners from many institutions. Random forest (RF) and convolutional neural network (CNN) models were trained on metadata and pixel data, respectively; a combined RF model incorporated the CNN logits from the pixel-based model together with the metadata. Four cohorts were used for model development and evaluation: multiple sclerosis (MS) research (n = 11,106 series), MS clinical (n = 3244 series), glioma research (n = 612 series, test/validation only), and ADNI PTSD (n = 477 series, training only). Together, these cohorts represent a broad range of acquisition contexts (scanners, sequences, institutions) and subject pathologies. The pixel-based CNN and combined models achieved accuracies between 97% and 98% on the clinical MS cohort, and validation/test accuracies on the glioma cohort were 99.7% (metadata only) and 98.4% (CNN). Accurate and generalizable classification of MRI acquisition contrast types was demonstrated. Such methods are important for enabling automated data selection in high-throughput and big-data image analysis applications.
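The combined model described above can be illustrated with a small sketch: metadata-derived features are concatenated with the per-class logits of a pixel-based CNN, and the joint feature vector is fed to a random forest classifier. This is not the authors' implementation; the feature names, dimensions, and synthetic data below are hypothetical and serve only to show the fusion pattern.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_series, n_classes = 400, 5  # e.g., T1, T1-post, T2, FLAIR, PD

# Ground-truth contrast labels for the synthetic series
labels = rng.integers(0, n_classes, size=n_series)

# Hypothetical metadata features (stand-ins for DICOM header values
# such as EchoTime, RepetitionTime, InversionTime, FlipAngle),
# generated so they correlate with the label
metadata = labels[:, None] + 0.3 * rng.normal(size=(n_series, 4))

# Stand-in for CNN logits from a pixel-based model: one score per class,
# peaked at the true class plus noise
cnn_logits = 2.0 * np.eye(n_classes)[labels] + rng.normal(size=(n_series, n_classes))

# Fuse metadata features with CNN logits into one feature vector per series
features = np.concatenate([metadata, cnn_logits], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With the synthetic signal above, the fused classifier separates the classes almost perfectly; in practice the value of the combination comes from the two feature sources failing in different ways (missing or unreliable headers versus ambiguous pixel contrast).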



Acknowledgements

Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wpcontent/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf

Funding

Julia Cluceru was supported in part by T32 grant, P01CA118816. Riley Bove is supported by a National Multiple Sclerosis Society Harry Weaver Award.

Author information

Corresponding author: Jason C. Crane.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Cite this article

Cluceru, J., Lupo, J.M., Interian, Y. et al. Improving the Automatic Classification of Brain MRI Acquisition Contrast with Machine Learning. J Digit Imaging 36, 289–305 (2023). https://doi.org/10.1007/s10278-022-00690-z
