Key points

  • Most artificial intelligence tools for fracture detection in children have focussed on plain radiographic assessment.

  • Almost all eligible articles used training, validation and test datasets derived from a single institution.

  • Strict inclusion and exclusion criteria for algorithm development may limit the generalisability of AI tools in children.

  • AI performance was marginally higher than that of human readers, but the difference was not statistically significant.

  • Opportunities exist for developing AI tools for very young children (< 2 years old), those with inherited bone disorders and in certain clinical scenarios (e.g. suspected physical abuse).

Background

It is estimated that up to half of all children sustain a fracture at some point during childhood [1, 2] (~ 133.1 per 10,000 per annum). Fractures also represent a leading cause of long-term disability in children [3] and are present in 55% of children who have been physically abused [4]. Given the differences in children’s bone appearances on imaging compared to adults (including differences at varying stages of bone maturation), and the different patterns of injury (such as buckle/torus fractures, corner metaphyseal injuries and bowing deformities), emergency physicians, who are frequently the first to review and act upon imaging findings, can miss up to 11% of acute paediatric fractures compared to a specialist paediatric radiologist [5,6,7,8]. Of these, the majority (7.8%) could lead to adverse events and changes in management [8]. This is particularly concerning given that over half (57%) of all UK paediatric orthopaedic-related litigation cases relate to undetected or incorrectly diagnosed injuries, costing £3.5 million, with an average pay-out of between £28,000 and £57,000 per case [9, 10]. These findings are not limited to UK practice, with similar results from Norway [11] and the USA [12, 13], where paediatric claims resulted in higher indemnity paid per case compared with adults [12, 14].

One potential solution would be the use of artificial intelligence (AI) algorithms to rapidly and accurately detect abnormalities, such as fractures, on medical imaging. Such algorithms could be useful as an interpretative adjunct where specialist opinions are not always available. A systematic review of AI accuracy for adult long bone fracture detection on imaging reported pooled sensitivity and specificity rates of 96% and 94%, respectively [15]. Another systematic review [16] reported that several AI algorithms [17,18,19,20,21] were either as good as or better than general physicians and orthopaedic surgeons at detecting limb fractures on radiography. Whilst a minority of studies included some paediatric cases within their training dataset for algorithm development [22, 23], few have analysed how well these perform specifically and solely for the paediatric population.

The objectives of this systematic review are to assess the available literature regarding diagnostic performance of AI tools for paediatric fracture assessment on imaging, and where available, how this compares with the performance of human readers.

Materials and methods

Ethical approval was not required for this retrospective review of published data. This study was registered in PROSPERO International prospective register of systematic reviews, CRD42020197279 [24]. The updated PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) statement guidelines were followed [25] (Additional file 1).

Literature review

MEDLINE (Ovid), EMBASE, Web of Science and the Cochrane Library databases were searched for eligible articles published between 1 January 2011 and 31 December 2021 (an 11-year range), using database-specific Boolean search strategies with terms and word variations relating to ‘fracture’, ‘artificial intelligence’, ‘imaging’ and ‘children’. The full search was conducted on 1 January 2022; the complete search strategy is provided in Additional file 1 (Tables S1–S4). A repeat search was conducted on 18 February 2022 and again on 30 April 2022 to assess for interim publications since the original search.

Eligibility criteria

Inclusion criteria encompassed any work investigating the diagnostic accuracy for classification, prediction or detection of appendicular fractures on any radiological modality in children, using one or more automated or artificial intelligence models. Expert radiological opinion, follow-up imaging or surgical/histopathological findings were all considered acceptable reference standards. Studies were limited to human subjects aged 0–20 years, to include adolescents. No restrictions were placed on method of imaging, dataset size, machine vendor, type of artificial intelligence/computer-aided methodology or clinical setting.

Exclusion criteria included conference abstracts, case reports, editorials, opinion articles, pictorial reviews and multimedia files (online videos, podcasts). Articles without a clear reference standard, clear subgroup reporting (to assess whether a paediatric cohort was analysed) or those relating to robotics or natural language processing (NLP) rather than image analysis were excluded. We excluded any animal studies and those referring to excised bone specimens.

All articles were independently screened by two reviewers (both paediatric radiologists with prior experience of conducting systematic reviews and meta-analyses). Abstracts of suitable studies were examined, and full papers were obtained. The references of retrieved full-text articles were manually examined for other possible publications. Disagreements were resolved by consensus.

Methodological quality

Given the lack of quality assessment tools specifically designed for artificial intelligence methodology [26], we used the modified Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria [27] with consideration of several items outlined from the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guideline [28].

These are as follows:

  1. Patient selection, risk of bias: consideration regarding appropriate patient selection for the intended task, collation of a balanced dataset, suitable data sources, and unreasonable/extensive exclusion criteria;

  2. Patient selection, applicability: how applicable/useful the algorithm is for its intended usage, given the patient selection;

  3. Index test, risk of bias: consideration of measures of significance and uncertainty in the test;

  4. Index test, applicability: information on validation or testing of the algorithm on external data;

  5. Reference standard, risk of bias: sufficient detail to allow replication of the ground truth/reference standard, and whether the reader was blinded to clinical details;

  6. Reference standard, applicability: appropriateness for clinical practice.

This combined assessment using QUADAS-2 and CLAIM has previously been employed by other authors for systematic reviews evaluating artificial intelligence studies [29]. Due to the low number of studies fulfilling our inclusion criteria, it was decided a priori not to exclude any studies on the basis of quality assessment, to allow as complete a review of the available literature as possible.

Data extraction and quantitative data synthesis

Two reviewers independently extracted data from the full articles into a database (Excel, Microsoft, Redmond WA, USA). A descriptive approach was used to synthesise the extracted data. Information regarding the datasets in terms of the number of images, types of images, and number of diagnostic classes within the data set was collected and recorded. The evaluation metrics (i.e. diagnostic accuracy rates) used in each dataset for each study were described. Due to the heterogeneity of data and body parts assessed, it was planned a priori to provide a narrative description of the results.

Results

Eligible studies

The initial search performed on 1 January 2022 yielded 362 articles after the removal of duplicate studies. On the basis of study title and abstract, 318 articles were excluded or could not be retrieved. After review of the full text (n = 44), eight studies were included [17, 30,31,32,33,34,35,36]. The additional search of the medical literature on 18 February 2022 revealed one further study, giving a total of nine included studies. A PRISMA flowchart is shown in Fig. 1.

Fig. 1 PRISMA flow chart for the study search and selection

Methodological quality assessment

The risk of bias and applicability of the various studies are outlined in Fig. 2. In two studies, there was a high risk of bias and applicability concerns regarding patient selection [32, 35]. In one of these [35], a 3-dimensional ultrasound sweep of the distal radius was performed by medical students on a ‘convenience sample’ of children attending the emergency department with wrist injuries. Patients were neither consecutive nor randomly sampled; therefore, it was questionable how generalisable the study results could be. In the second study [32], children were only included if they had a confirmed lower limb fracture, and were labelled as having either normal fracture healing time or delayed fracture healing (> 12 weeks). The mechanism of follow-up to determine fracture healing time, and the reason for choosing a 12-week time frame, were not specified; furthermore, it was not stated whether children with pre-existing bone fragility disorders were included.

Fig. 2 Methodological quality assessment of the included studies using the QUADAS-2 tool. Risk of bias and applicability concerns are summarised for each domain and each included study

Almost half of all studies had unclear/moderate concerns regarding the applicability of patient selection (4/9, 44.4%) [31, 34, 36, 37], and most had concerns regarding the applicability of the index test (6/9, 66.7%) [31,32,33,34,35,36]. This was predominantly due to studies imposing strict exclusion criteria in their patient selection (e.g. exclusion of patients with healing bones, certain types of fractures, and treatment with casts or surgical correction devices), which would limit the application of the algorithm in clinical practice. In four studies, the risk of bias for the reference standard was considered unclear/moderate, as the radiology readers were unblinded to the clinical history, which may have influenced their reporting of findings and the subsequent algorithm performance [33,34,35]. Only two studies reported results for external validation of their algorithm using a dataset which was distinct from the training and validation datasets [17, 30].

Patient demographics and study setting

The list of studies included, study aims, and patient inclusion/exclusion criteria are provided in Table 1. Patient demographics, type of centre and ground truth/reference levels are covered in Table 2. The majority of the studies (5/9, 55.6%) involved assessment of paediatric upper limb trauma, with three assessing the elbow and two assessing the forearm. One study assessed any fracture of the appendicular skeleton, and the remaining three assessed trauma of the lower limb.

Table 1 Study aims, injury to be detected and patient inclusion/exclusion criteria, organised by publication date
Table 2 Study characteristics for articles included in systematic review, organised by publication date

In three of the studies, children below the age of 1 year were not included in the study dataset and in one study the age range was not provided. In three studies, the gender split of the dataset was not reported, and none of the studies provided details regarding the ethnicity or socio-economic class of the patients.

The majority of studies (8/9, 88.9%) used datasets derived from the authors’ own institution (i.e. single-centre studies) and analysed fractures on plain radiography. Only one study reported the development of an AI algorithm for fracture detection using ultrasound. The ground truth/reference standard for fracture assessment was the radiology report in most studies (7/9, 77.8%) and the opinion of an orthopaedic surgeon in one (1/9, 11.1%); in the one study relating to ultrasound assessment, the corresponding plain radiography report acquired within 30 days of the ultrasound acted as the reference standard for the presence of a forearm fracture.

Imaging dataset sizes

The total datasets within the articles were described in different ways: some in terms of the number of patients or number of examinations (where each consisted of multiple images) and some in terms of the total number of images. Datasets ranged from 30 to 2549 patients, 55 to 21,456 examinations and 226 to 58,817 images. Depending on the aims and objectives of each study, some provided a breakdown of the number of examinations (and the split between normal and abnormal examinations) as well as the number of images allocated to training, validation and testing. Full details are provided in Table 3.

Table 3 Input data demographics and study dataset sizes, organised by publication date

Imaging algorithm methodology

Technical details regarding the methodology and hyperparameters used in the computer-aided/artificial intelligence algorithm development are summarised in Additional file 1: Table S5.

In one study, a computer-aided detection (CAD) method was used to generate a graphical user interface (GUI) to automatically extract/segment the forearm bones on an image, analyse their curvature and determine the presence of underlying bowing/buckling fractures [36]. In another study, a commercially available AI product utilising a deep convolutional neural network (Rayvolve®) was employed [30]. The remainder either developed new, or re-trained existing, convolutional neural networks. One study evaluated the use of both self-organising maps (SOMs) and convolutional neural networks for the assessment of fracture healing [32].

In terms of neural network architecture, the commercially available product (Rayvolve®) was based on a RetinaNet architecture [30], two studies based their neural network on the Xception architecture [33, 34] and one study used the ResNet-50 architecture [17]. For the remainder, the neural network architecture was not described in the study.
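To illustrate the transfer-learning approach described above, the following is a minimal sketch (in Python, using PyTorch and torchvision) of re-training a pretrained ResNet-50 for binary fracture classification. The folder layout, class labels and hyperparameters are hypothetical and are not taken from any of the included studies.

```python
# Minimal sketch: fine-tuning a pretrained ResNet-50 for binary fracture
# classification. All paths, labels and hyperparameters are illustrative only,
# not those used by any study in this review.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Radiographs resized and normalised with the ImageNet statistics expected
# by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: train/{fracture,no_fracture}/*.png
train_data = datasets.ImageFolder("train", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the final layer with a 2-class head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()
```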

Algorithm diagnostic accuracy rates

The diagnostic accuracy rates for each study are listed according to body part and also data set (e.g. validation or test set) in Table 4. For the most common paediatric body part assessed (elbow), the algorithms tested on the test dataset achieved sensitivities of 88.9–90.7%, with specificity of 90.9–100%. The only study that evaluated fracture detection rate for the whole appendicular skeleton (across multiple body parts) achieved 92.6% sensitivity and 95.7% specificity [30].

Table 4 Diagnostic accuracy of artificial intelligence algorithms for fracture detection, organised by body parts

In three studies, the performance of the final AI algorithm was tested against independent human readers on the same dataset [17, 31, 35]. The differences in diagnostic accuracy rates are provided in Table 5. England et al. [31] reported their AI algorithm to have a marginally lower diagnostic accuracy rate than a senior emergency medicine trainee in detecting elbow effusions (diagnostic accuracy 90.7% compared to 91.5%), but a greater sensitivity (90.9% versus 84.8%). Zhang et al. [35] reported their AI algorithm to perform better than a paediatric musculoskeletal radiologist in detecting distal radial fractures on ultrasound (92% diagnostic accuracy versus 89%). Choi et al. [17] examined an AI algorithm for supracondylar fracture detection which achieved a greater sensitivity than the summation score of three consultant radiologists (100% versus 95.7%). When this algorithm was used as an adjunctive measure for image interpretation, it improved the performance of the lowest performing of the three radiologists, with sensitivity increasing from 95.7% (radiologist acting alone) to 100% (same radiologist with AI assistance). Despite these slight differences in performance across the studies, there was an overlap in the 95% confidence intervals provided, suggesting the differences were not statistically significant.

Table 5 Studies comparing artificial intelligence algorithms versus (or combined with) human reader, organised by publication date
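As an illustration of how overlapping confidence intervals are judged, the following minimal sketch computes sensitivity, specificity and Wilson 95% confidence intervals from a confusion matrix; the counts shown are hypothetical and do not correspond to any study in Table 5.

```python
# Sketch: sensitivity, specificity and Wilson 95% confidence intervals from a
# confusion matrix. Counts are hypothetical, not taken from any included study.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion (95% by default)."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Hypothetical reader: true positives, false negatives, true negatives, false positives
tp, fn, tn, fp = 88, 9, 95, 8

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"Sensitivity {sensitivity:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {specificity:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```

Where the intervals for the AI algorithm and the human reader overlap, as in the studies above, the difference cannot be considered statistically significant, although adequately powered formal testing would be preferable.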

Discussion

Almost all published literature relating to AI assessment for acute appendicular fractures in children is based on radiographic interpretation, with fractures of the upper limb (specifically the elbow) being the most common body part assessed. Nearly all articles used training, validation and testing data derived from a single centre, with few performing external validation. When AI tools were compared to the performance of human readers, the algorithms demonstrated comparable diagnostic accuracy rates and in one study improved/augmented the diagnostic performance of a radiologist.

In this review, we focussed on the assessment of computer-aided/artificial intelligence methods for paediatric appendicular fracture detection, given that these are the most commonly encountered fractures in an otherwise healthy paediatric population (accounting for approximately 70–99% of paediatric fractures [37,38,39], with less than 5% of fractures affecting the axial skeleton [40,41,42]). Applications of computer-aided/AI algorithms to paediatric skull and spine fractures have, however, been described. One study developed an AI algorithm for the detection of skull fractures in children from plain radiographs [43] (using the CT head report as the reference standard) and reported high AUC values on both the internal test set (0.922) and an external validation set (0.870), with improved accuracy of human readers when using AI assistance (compared to without). Whilst this demonstrates proof of concept, clinical applicability is limited, since most radiology guidelines encourage the use of CT over radiographs for paediatric head trauma [44,45,46].

In two articles pertaining to spine fractures [48, 49], the authors applied commercially available, semi-automated software tools designed for adults to a paediatric population for the detection of vertebral fractures on plain radiography or dual-energy X-ray absorptiometry (DEXA). They reported low sensitivities for both software tools (36% and 26%), which were not sufficiently reliable for vertebral fracture diagnosis. This finding raises an important general issue regarding the need for adequate validation and testing of AI tools in specific patient populations, in this case children, prior to clinical application, to avoid potentially detrimental clinical consequences. Such validation was performed for one commercially available product included in the current systematic review (Rayvolve®, AZMed), which demonstrated high diagnostic accuracy rates, particularly for older children (sensitivity 97.1% for 5–18-year-olds versus 91.6% for 0–4-year-olds; p < 0.001). Whilst other fracture detection products are now commercially available (e.g. BoneView, Gleamer [49]), peer-reviewed publications of such products to date relate only to diagnostic accuracy rates in adults [50] (although paediatric outcomes are available as a conference abstract on the company website [51]).

Most studies in this review chose to develop and apply their AI algorithm to one specific body part, rather than to all bones of the paediatric skeleton. Taking the commonest body part assessed (i.e. the elbow), dedicated algorithms yielded higher diagnostic accuracy rates than the commercially available product for the same body part (which was trained to detect fractures across the entire appendicular skeleton). In this example, sensitivity was between 89.5 and 90.7% for the dedicated algorithms (on test data) versus 88% for the generalised tool. Whilst the difference may be small, it could vary across other body parts for which there is insufficient information on dedicated algorithms. It will therefore be important to better understand the epidemiology of fractures across different population groups, and whether algorithms with higher diagnostic accuracy for certain commonly fractured body parts would need to be implemented in addition at certain institutions.

Another aspect highlighted by the present study relates to patient selection. Inclusion and exclusion criteria varied amongst the different studies, and patient ages covered a broad range (with heterogeneity in bone maturation and mechanisms of injury). Few studies assessed fractures in children under 2 years of age (who are more likely to be investigated for suspected physical abuse [52]) or in those with inherited bone disorders (e.g. osteogenesis imperfecta). This could be because fewer children within these categories attend emergency departments to provide the necessary imaging data for training AI models, but the result is that specific paediatric populations may be unintentionally marginalised or poorly served by such new technologies. This raises potential ethical considerations about their future usage, particularly when performance characteristics are extrapolated beyond the population on which the tool was developed and validated [53]. An example would be an AI tool to evaluate, as an adjunct to clinical practice, the particular features of fractures relating to suspected physical abuse, given that many practising paediatric radiologists do not feel appropriately trained or confident in this aspect of imaging assessment [54,55,56,57]. Whilst data are limited, one study did address the use of AI for identifying suspected physical abuse through the detection of corner metaphyseal fractures (a specific marker of abuse) [58], with high diagnostic accuracy. Future studies addressing these patient populations, with details regarding the socio-economic backgrounds of the cases used for training data, would help to develop more inclusive and clinically relevant tools. Expanding the topic of fracture assessment to address bone healing and post-orthopaedic complications may be another area for further development, given that most articles also excluded cases with healing fractures, casts or indwelling orthopaedic hardware.

With the exception of one study, all methods for developing artificial intelligence for fracture detection identified in this review relied on creating or retraining deep convolutional neural networks, which ‘learn’ features within an image to provide the most accurate desired output classification. Only one study exclusively adopted a more traditional machine learning method, using stricter, rule-based computer-aided detection to identify bowing fractures of the forearm [36]. It is unclear whether a convolutional neural network was unsuitable or less accurate for the detection of these specific fractures, or was simply not attempted due to lack of capability; however, differences in the performance of various methods should be compared within the same dataset, in relation not only to performance but also to resource requirements/costs and other aspects such as the ‘explainability’ of the features used by the algorithm. It is likely that future AI tools for paediatric fracture detection will involve single convolutional neural networks or ensembles of them to provide optimal performance. Nonetheless, one should not completely disregard simpler machine learning methods, and should consider how these can best be employed, given the significant computational power, and thus carbon footprint, required to train deep learning solutions, especially in the light of current global efforts to create a more sustainable environment [59].
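By way of contrast with the deep learning approaches, a simpler, feature-based baseline of the kind alluded to above might combine hand-crafted measurements with a conventional classifier. The following sketch (Python, scikit-learn) is purely illustrative, uses synthetic placeholder features and labels, and does not reproduce the CAD method of the study cited above [36].

```python
# Sketch of a lightweight, feature-based baseline: a logistic regression trained
# on hand-crafted measurements (e.g. bone curvature, cortical break count).
# Features and labels here are hypothetical placeholders, not data from any study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical feature matrix: rows are forearms, columns are measurements.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = fracture, 0 = no fracture (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("Illustrative AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

Such a baseline is far cheaper to train than a deep network, which is relevant to the computational and environmental considerations noted above, although its accuracy depends heavily on the quality of the hand-crafted features.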

Although there are fewer publications relating to AI applications for paediatric fractures than for adult imaging, these data demonstrate that several solutions are being developed and tested with children in mind. Given the current crisis in the paediatric radiology workforce and restricted access to specialist services [60,61,62,63,64,65], an immediate, accurate fracture reporting service could potentially confer a cost-saving effect [66] and reduce healthcare inequalities. Nevertheless, there were many limitations to the published literature. For example, health economic analyses and studies assessing whether such algorithms actually translate into real improvements in patient outcomes are lacking, and it is unclear how generalisable many of the algorithms may be, given that most have been tested at a single centre without external validation, and that the multi-reader studies comparing human versus AI performance were not appropriately powered. Therefore, although this review found that in a subset of the studies the performance of AI algorithms was not significantly different from human performance, this may be due to underpowered sample sizes. Furthermore, in practice, paediatric radiographs may be interpreted by a range of different healthcare professionals working at different experience levels and with varying subspecialty backgrounds (e.g. general radiologists, paediatric radiologists, musculoskeletal radiologists, paediatricians, orthopaedic surgeons). The current literature only compares AI performance against one kind of healthcare professional, which limits our understanding of whom such AI algorithms may best serve and thus how best to implement them.

It should also be recognised that there may be great differences between optimised test performance on validation sets and the ‘real-world’ impact of implementing such a tool in routine clinical workflows, not only as a consequence of differences/variations in input data, but also owing to usability aspects and the pragmatic ability to incorporate such tools into existing workflows. These factors raise questions regarding the future widespread implementation and funding of AI solutions, as individual hospitals and healthcare systems will require a return on their investment at the level of clinical/operational impact rather than pure ‘test performance’ [67]. For these reasons, economic analyses and cost and clinical effectiveness studies will be necessary to understand whether AI algorithms for fracture detection in children do offer improved benefits.

Improved methods of secure data sharing (possibly with public datasets of paediatric appendicular radiographs) and greater collaboration between hospitals and industrial and academic partners could be beneficial in terms of developing and implementing novel digital tools for paediatric imaging at a lower cost, with future real-world implementation studies. Further research on the topic of AI for paediatric fracture detection should consider aspects that would be helpful to hospital decision-makers, but also consider the uncertainties and bias within test datasets such as the wide age range of patients included, range of different pathologies and injury patterns sustained by children at different stages of maturation which may not all be as accurately evaluated. Improved transparency and subgroup analyses of these, with more robust external validation of emerging and commercially available tools, would provide the necessary evidence for clinicians and hospital managers to better understand whether such technology should be integrated into their own healthcare systems.

There were several limitations to the present study. During the literature review, we included studies that specifically related to paediatric fracture detection. It is possible that some studies included children within their population dataset but did not make this explicit in their abstract or methodology, and may therefore have been excluded. Secondly, the AI literature is expanding at a rapid rate, and it is likely that newer articles will be available by the time of publication. To minimise this effect, an updated review of the literature using the same search strategy was performed immediately before initial article submission and again at resubmission after review, to ensure the timeliness of the findings. We also acknowledge that articles relating to AI applications may be published in open-access but non-peer-reviewed research-sharing repositories (e.g. arXiv); these were not searched, since only adequately peer-reviewed articles were included. Finally, it proved difficult to consistently extract the required information from the available literature. When assessing for bias, we used a slight adaptation of the QUADAS-2 guideline (whilst purpose-built tools are developed [68]), and in some cases the study methodology appeared incomplete or incomprehensible, particularly in articles written prior to the publication of AI reporting guidelines [69,70,71]. Accordingly, we included the AI algorithm methodology as an Additional file 1 table, as wide variations in reporting made direct comparisons challenging.

Conclusions

In conclusion, this review has provided an overview of the current evidence pertaining to AI applications for paediatric appendicular fracture assessment on imaging. There is wide heterogeneity in the literature with respect to paediatric age ranges and the body parts assessed by AI for fracture detection, and limited information on algorithm performance on external validation.

Further work is still required, especially testing solutions across multiple centres to ensure generalisability, and there are currently opportunities for the development of AI solutions for assessing paediatric musculoskeletal trauma on imaging modalities other than plain radiography and in certain at-risk fracture populations (e.g. metabolic or brittle bone diseases and suspected child abuse cases). Improved research methodology, particularly using multicentre datasets for algorithm training with external validation and real-world evaluation, would help to better understand the impact of these tools on paediatric healthcare.