Key Points

  • This study presents a comprehensive methodology for a systematic review of the current state of radiology artificial intelligence (AI).

  • Detailed study characteristics will be collected and analysed, including the nature of the task, disease, modality, subspecialty and data processing.

  • Subgroup analyses will be performed to highlight differences in design characteristics between tasks, subspecialties and modalities, and to identify trends in algorithm use over time.

Introduction

Background

There have been huge advancements in computer vision following the success of deep convolutional neural networks (CNNs) at the 2012 ImageNet challenge [1]. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence (AI), the broader field of how computers mimic human behaviour. The senior author of that seminal AlexNet paper, Geoffrey Hinton, advised in 2016 that we should stop training radiologists, as it was obvious that within 5 years deep learning (DL) would have surpassed them. While there have been major leaps forward in DL-powered computer vision as it applies to radiology, the progress in performance has not yet materialised as he predicted. Rather, specific “narrow” applications have proven successful, and generalised superhuman performance remains elusive. Problems such as generalisability, stability and implementation, crucial in the medical field, have seen the clinical application of AI in healthcare lag behind other industries [2]. While recent guidelines have been established to advise on ethics, data management and the potential directions of future research [3,4,5], systematic reviews of the entire field are lacking. Our systematic review aims to examine the radiology AI literature from a task-specific point of view: many of the roles of the clinical radiologist can be decomposed into tasks commonly faced by computer engineers in related computer vision fields, such as segmentation, identification, classification and prediction [6].

Objectives

This systematic review will aim to (1) assess the different methods and algorithms used to tackle these tasks, (2) examine potential bias in methodology, (3) consider the quality of data management in the literature and (4) outline trends in all of the above.

Methods and analysis

This systematic review has been registered with PROSPERO (registration number: CRD42020154790). We will report this systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and have completed the PRISMA-P checklist for this protocol (Table 1).

Table 1 PRISMA-P (preferred reporting items for systematic review and meta-analysis protocols) 2015 checklist: recommended items to address in a systematic review protocol

Inclusion/exclusion criteria for the selection of studies

Type of study design, participants

Two separate reviews are proposed: a comprehensive primary review of all the literature and a secondary review restricted to the paediatric literature.

The comprehensive review will include all clinical radiological (not laboratory- or phantom-based) deep learning papers that aim to complete a segmentation, identification, classification or prediction task using computer vision techniques. Human, hospital-based studies that use computer vision techniques to aid in the care of patients through radiological diagnosis or intervention will be included. The paediatric review will include all machine learning and deep learning tasks as applied to paediatric clinical radiology.

Inclusion criteria

Clinical radiological papers that use DL computer vision techniques to complete a segmentation, identification, classification or prediction task based on radiographic, computed tomography (CT), magnetic resonance (MR), ultrasound (US), nuclear medicine/molecular or hybrid imaging techniques will be included. Where the comparison group is combined human–AI performance, this will be specifically recorded.

Exclusion criteria

Functional MRI (fMRI) papers are not included, as the techniques used in the computer analysis of fMRI data are quite separate from the computer vision-based tasks that are the subject of this review. To ensure focus on computer vision-based tasks and to adequately assess these techniques, "radiomics" papers and those that focus on texture analysis or the identification of imaging biomarkers will be excluded from the primary review. Connectomics, quality-assessment and decision-support papers are not included. Image processing and registration papers are excluded. Image quality papers are excluded from the primary review. Papers solely for use in radiation therapy are also excluded. Non-human and phantom studies are excluded.

Type of intervention

We will not place a restriction on the intervention type and will include trials that study the clinical application of AI to radiology as outlined above.

Search method for the identification of trials

Electronic search

We will perform electronic searches of MEDLINE (PubMed) and EMBASE from 2015 until 31 December 2019. Zotero will be used as our reference manager, and the revtools package in R will be used to eliminate duplicate records. The search will be conducted in English. The search terms used are reported in Table 2. The artificial intelligence and radiology terms were combined with the AND operator, with the paediatric terms added with a further AND operator for the paediatric sub-section. Search terms were agreed by consensus between the two co-principal investigators, who have backgrounds in radiology and computer science respectively.

Table 2 Search terms
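
As an illustration of the planned deduplication step, a minimal sketch using the revtools workflow in R is given below. The file names are hypothetical and the exact function arguments may differ between revtools versions; this is a sketch of the step, not the final pipeline.

  # Minimal sketch: merge database exports and remove duplicate records
  # with revtools; the file names are hypothetical.
  library(revtools)
  library(dplyr)

  medline <- read_bibliography("medline_export.ris")
  embase  <- read_bibliography("embase_export.ris")
  records <- bind_rows(medline, embase)

  # Group records whose titles match, ignoring case and punctuation
  matches <- find_duplicates(records,
                             match_variable = "title",
                             to_lower = TRUE,
                             remove_punctuation = TRUE)

  # Keep one representative record per duplicate group and export
  unique_records <- extract_unique_references(records, matches)
  write_bibliography(unique_records, filename = "deduplicated.ris")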

Selection and analysis of trials

We will review the titles and abstracts of studies to identify clinical radiological artificial intelligence studies for inclusion or exclusion. Studies with insufficient information to determine the use of AI computer vision methods will also be included for full-text review. We will then perform a full-text review to confirm the studies to be included in the final systematic review. This process will be summarised in a PRISMA flowchart. Title, abstract and full-text review will be performed by B.K. and S.B. Disagreements will be resolved by consensus or by a third reviewer (R.K.) if necessary.

Before full data extraction, all reviewers will complete the same 5% subsample, and answers will be compared to ensure > 90% inter-reviewer agreement. Data extraction will be undertaken by three radiologists, two of whom are nationally certified and have a research interest in artificial intelligence (S.C. and G.H.). The third is a radiology resident with 4 years of experience who is a PhD candidate in radiology artificial intelligence.
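
As a sketch of how this calibration check could be computed, assuming the pilot answers are stored in a long-format table with one row per reviewer per question (the file and column names are hypothetical):

  # Sketch of the inter-reviewer agreement check on the 5% pilot
  # subsample; file and column names are hypothetical.
  library(dplyr)

  # Columns: study_id, question, reviewer, answer
  pilot <- read.csv("pilot_subsample.csv")

  # A question counts as agreed when every reviewer gave the same answer
  agreement <- pilot %>%
    group_by(study_id, question) %>%
    summarise(agree = n_distinct(answer) == 1, .groups = "drop") %>%
    summarise(percent_agreement = 100 * mean(agree))

  # Full extraction proceeds only once agreement exceeds 90%
  stopifnot(agreement$percent_agreement > 90)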

Three reviewers will extract the following information in parallel and record it in a custom database:

  1. Country of origin (paediatric review only)
  2. Radiology subspecialty
  3. Retrospective or prospective design
  4. Supervised or unsupervised learning
  5. Number of participants
  6. Problem to be solved, i.e. segmentation, identification, classification or prediction
  7. Target condition and body region
  8. Reference standard (histology, radiology report, surgery)
  9. Method for assessment of the reference standard
  10. Type of internal validation
  11. External validation
  12. Indicator method for predictor measurement; exclusion of poor-quality imaging; heatmap provided? Other explainability?
  13. Algorithm: architecture; transfer learning applied; ensemble architecture used
  14. Data source: number of images for training/tuning, source of data, data range, open-access data
  15. Was manual segmentation used?

Information will be extracted using a closed-question format with an "add option" function if required. This is intended to maintain consistency while remaining flexible enough to account for the heterogeneity in the data. Please see Additional file 1. The full questionnaire will be made open access once the review is complete.
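
To illustrate the closed-question format, a hypothetical and heavily simplified extraction record is sketched below; the field names and option lists are assumptions, and the full questionnaire is given in Additional file 1.

  # Hypothetical, simplified extraction record illustrating the
  # closed-question format; the real option lists are in Additional file 1.
  extraction_record <- data.frame(
    study_id            = "example_001",
    subspecialty        = factor("neuroradiology",
                                 levels = c("neuroradiology", "chest",
                                            "abdominal", "musculoskeletal",
                                            "other")),
    design              = factor("retrospective",
                                 levels = c("retrospective", "prospective")),
    learning            = factor("supervised",
                                 levels = c("supervised", "unsupervised")),
    task                = factor("classification",
                                 levels = c("segmentation", "identification",
                                            "classification", "prediction")),
    external_validation = FALSE,  # item 11
    transfer_learning   = TRUE,   # part of item 13
    manual_segmentation = FALSE   # item 15
  )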

Assessment of the quality of the studies: risk of bias

Due to the nature of the included studies, there will be a high degree of heterogeneity within the review; this has been acknowledged in the literature to date [7]. We will, however, use basic surrogates of risk of bias, including reporting of inclusion and exclusion criteria, internal or external validation and performance indicators, to estimate bias.
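
As an illustration, these surrogate items could be combined into a simple per-study flag count; the field names and flag definitions below are assumptions, not a validated risk-of-bias instrument.

  # Sketch of surrogate risk-of-bias flags derived from extracted fields;
  # the field names and flag definitions are assumptions.
  library(dplyr)

  studies <- read.csv("extraction_database.csv")

  studies <- studies %>%
    mutate(
      flag_criteria   = !criteria_reported,    # inclusion/exclusion criteria not reported
      flag_validation = !external_validation,  # internal validation only
      flag_metrics    = !metrics_reported,     # performance indicators absent
      bias_flags      = flag_criteria + flag_validation + flag_metrics
    )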

Data synthesis

We will not perform a meta-analysis as part of this systematic review. A narrative synthesis of the data will be performed.

Analysis by subgroups

We will report overall outcomes and outcomes by task, i.e. segmentation, identification, classification and prediction tasks. Descriptive statistics will be used to illustrate trends in the data.
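
As a sketch of the planned descriptive summaries, assuming an extraction table with fields consistent with the record sketched earlier (the field names are hypothetical):

  # Sketch of descriptive subgroup summaries; field names are assumptions.
  library(dplyr)

  studies <- read.csv("extraction_database.csv")

  # Design characteristics by task
  studies %>%
    group_by(task) %>%
    summarise(
      n_studies        = n(),
      pct_external_val = 100 * mean(external_validation, na.rm = TRUE),
      pct_transfer     = 100 * mean(transfer_learning, na.rm = TRUE)
    )

  # Trend in architecture use over time
  studies %>%
    count(publication_year, architecture) %>%
    arrange(publication_year, desc(n))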

Study status

This systematic review will start in July 2020. We hope to have our first results in late 2020.

Patient and public involvement

Our research group has engaged with a specific patient group, MS Ireland, to discuss their ideas, concerns and expectations around the clinical application of AI to radiology, and these discussions continue to inform our research decisions.

Ethics and dissemination

Ethical approval was not required for this study. We will publish the results of this systematic review in a peer-reviewed journal.

Discussion

The volume of medical imaging investigations has increased greatly in recent years [8]. The number of clinicians trained in the expert interpretation of these investigations, however, has failed to keep pace with demand [9]. AI has been suggested as one possible solution to this supply/demand issue [8]. A huge volume of research has been published in a short time. Furthermore, the number of reviewers with expertise in both radiology and AI is limited, and standards for publication have only recently been developed [10]. As a result, papers of varying quality may be published, with the potential to negatively impact patient care. In addition, many of the papers focus on a small range of pathologies and tasks, which opens the possibility of unnecessary duplication of work.

We anticipate rapid growth in the number of included papers year on year. We also expect papers to be concentrated in a narrow range of topics. We aim to identify which algorithms are most popular for particular tasks and to investigate the prevalence of unique or custom models compared with off-the-shelf models. The issue of hyperparameter optimisation (whether automated or handcrafted) will also be examined. Statistical analysis will also feature in the review, with a focus on sample size calculation and performance metrics [11].

We hope the systematic nature of this review will identify smaller papers with sound methods that may have been overlooked, as well as highlight papers where some methods may have been suboptimal, and thereby provide an evidence base for a framework for methodological design.

This review has potential limitations, including publication and reporting bias. We will not be able to include studies with unpublished data, and we may misclassify studies that do not clearly report adaptive designs in their methodology. Furthermore, the heterogeneity of the included studies will not allow for a meaningful meta-analysis of results. The expected high number of included articles (in the range of 1000 articles over the 5 years 2015–2019) will only allow for a high-level overview of certain themes.

Finally, we hope to raise awareness among the radiology community of the questions being asked in the radiology AI literature, as well as the methods being used to answer them, and to give an overview of techniques for those with an engineering or computer science background looking to contribute to the field.