Dear Editor:

The number of articles presenting a neurosurgical prediction model is rapidly increasing [1]. Although the number of publications reporting a clinical prediction model in Child’s Nervous System, Journal of Neurosurgery Pediatrics, and Pediatric Neurosurgery has remained relatively constant over the years (Fig. 1a), approximately two-thirds of these publications have appeared since 2015 (Fig. 1b). With so many prediction models now available, we should be able to draw firm conclusions about which model to use for our patients.

Fig. 1

a, b Time trends of the published number of pediatric neurosurgical clinical prediction models according to the following systematic searches in PubMed until December 31, 2019. For the black results: (Prognostic-index* OR Prognostic-rule* OR Prognostic-model* OR prognostic-scor* OR prediction-index* OR prediction-rule* OR prediction-model* OR prediction-scor* OR predictive-index* OR predictive-rule* OR predictive-model* OR predictive-scor*) AND (Neurosurgery [mh] OR Neurosurgical Procedures [mh] OR Neurosurg* OR Neurological-surg*) AND (“child”[mh] OR “Infant”[mh] OR “Adolescent”[mh] OR “Minors”[mh] OR “Pediatrics”[mh] OR “Child Health Services”[mh] OR “Hospitals, Pediatric”[mh] OR “Intensive Care Units, Pediatric”[Mesh] OR infan*[tiab] OR newborn*[tiab] OR new born*[tiab] OR baby [tiab] OR babies [tiab] OR neonat*[tiab] OR perinat*[tiab] OR postnat*[tiab] OR prematur*[tiab] OR pre-matur*[tiab] OR child [mesh] OR child [tiab] OR child’s [tiab] OR childhood*[tiab] OR children*[tiab] OR kid [tiab] OR kids [tiab] OR toddler*[tiab] OR adoles*[tiab] OR teen*[tiab] OR boy*[tiab] OR girl*[tiab] OR minors*[tiab] OR underag*[tiab] OR under age*[tiab] OR under aging [tiab] OR under ageing [tiab] OR juvenil*[tiab] OR youth*[tiab] OR kindergar*[tiab] OR puber*[tiab] OR pubescen*[tiab] OR prepubescen*[tiab] OR prepuberty*[tiab] OR pediatric*[tiab] OR peadiatric*[tiab] OR schoolchild*[tiab] OR preschool*[tiab] OR highschool*[tiab] OR suckling*[tiab] OR PICU [tiab] OR NICU [tiab] OR PICUs [tiab] OR NICUs [tiab]). For the colored results: (Prognostic-index* OR Prognostic-rule* OR Prognostic-model* OR prognostic-scor* OR prediction-index* OR prediction-rule* OR prediction-model* OR prediction-scor* OR predictive-index* OR predictive-rule* OR predictive-model* OR predictive-scor*) AND (Neurosurgery [mh] OR Neurosurgical Procedures [mh] OR Neurosurg* OR Neurological-surg*) AND (“Childs Nerv Syst”[Journal] OR “Journal of Neurosurgery Pediatrics”[Journal] OR “Pediatric Neurosurgery”[Journal])

Clinical prediction models aim to predict an outcome of interest, for example, survival in high-grade glioma (HGG) patients or intraventricular hemorrhage in preterm infants, by combining two or more patient-related variables. The resulting predictions can then support medical and shared decision-making, for example, when deciding whether to initiate surgical treatment or when counseling on future lifestyle.
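
To make this concrete, the sketch below shows, with entirely hypothetical variable names and coefficients, how a simple logistic prediction model combines two patient-related variables into a single predicted probability; it is an illustration only, not a model taken from the literature.

```python
# Minimal illustration: a clinical prediction model combines patient-related
# variables into a predicted probability. All variable names and coefficients
# below are hypothetical.
import math

def predicted_probability(age_years, tumor_size_mm,
                          intercept=-4.0, b_age=0.03, b_size=0.05):
    """Logistic model: P(outcome) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))."""
    linear_predictor = intercept + b_age * age_years + b_size * tumor_size_mm
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# For example, a 10-year-old with a 30 mm lesion under these made-up coefficients:
print(round(predicted_probability(10, 30), 2))  # ~0.10
```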

The development and evaluation of clinical prediction models involve multiple methodological steps. It is well known that these steps are often inadequately addressed and/or inadequately reported in publications, which clearly limits the usefulness of the presented prediction model. Using invalid prediction models may jeopardize adequate decision-making in our daily clinical practice. Therefore, we want to point out a few crucial aspects of prediction models.

1. Sample size—In clinical prediction model studies, the number of events in the study population defines the effective sample size. As a rule of thumb, a minimum of 10 events per prognostic variable considered has been generally accepted, although more advanced calculations have recently been proposed that may yield a different ratio. The number of events is the smaller of the number of patients having the event and the number not having the event. Thus, if 50 of 110 HGG patients died during the study period, considering up to 5 prognostic variables seems reasonable for the development of a prediction model (a short numerical illustration follows this list). Considering too many prognostic variables increases the risk of overfitting [2]. Overfitted models show promising results when evaluated on the patients on which the model was developed but disappointing results when applied to other sets of patients. This point is essential because many pediatric neurosurgical studies have relatively small effective sample sizes.

2. Validation—It is vital to gauge the validity of the predictions provided by a prediction model. As a result of overfitting, prediction models tend to show overly optimistic predictive performance in terms of discrimination and calibration [2]. Internal validation aims to quantify this optimism; these techniques reuse the same set of patients on which the model was developed (one common bootstrap approach is sketched below this list). This step is a minimum requirement for publication of a prediction model. External validation assesses the performance of the prediction model on a different set of patients, for example, patients from other geographical locations and/or other time periods. External validation is imperative before clinical uptake of the proposed model can take place.

3. Reporting—Guidelines for reporting prediction models exist. The TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) statement, together with its explanation and elaboration paper, provides crucial information for the design, conduct, and evaluation of a prediction model [3, 4]. Adherence to the items of the TRIPOD checklist is highly recommended for proper and transparent reporting. Graphical presentations of the underlying statistical model are often provided in a publication, while the (statistical) details are lacking. For example, when Cox regression is used for predicting survival, the regression coefficients and the baseline survival at a given time point should be reported to enable external validation (illustrated in the final sketch below this list).
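
Regarding point 1, the rule-of-thumb arithmetic can be written out directly; the sketch below simply restates the HGG numbers from that point (110 patients, 50 deaths) and the 10 events-per-variable heuristic in code.

```python
# Effective sample size and the 10 events-per-variable (EPV) rule of thumb,
# using the HGG numbers from point 1.
n_patients = 110
n_events = 50                                             # patients who died during the study period
n_non_events = n_patients - n_events
effective_sample_size = min(n_events, n_non_events)       # smaller of events and non-events
max_candidate_predictors = effective_sample_size // 10    # 10 EPV heuristic
print(effective_sample_size, max_candidate_predictors)    # -> 50 5
```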
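
Regarding point 2, internal validation can be implemented in several ways; the sketch below shows one common bootstrap optimism correction of the apparent discrimination (AUC), assuming a binary outcome, simulated data, and scikit-learn rather than any particular published model. It is an illustrative sketch of the general technique, not a prescribed implementation.

```python
# Bootstrap optimism correction of the apparent AUC (one form of internal validation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 3))                        # three hypothetical predictors
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # simulated binary outcome

def fit(X, y):
    return LogisticRegression().fit(X, y)

apparent_auc = roc_auc_score(y, fit(X, y).predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                                 # bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    Xb, yb = X[idx], y[idx]
    if yb.min() == yb.max():                         # skip degenerate resamples
        continue
    model_b = fit(Xb, yb)
    auc_boot = roc_auc_score(yb, model_b.predict_proba(Xb)[:, 1])  # on bootstrap sample
    auc_orig = roc_auc_score(y, model_b.predict_proba(X)[:, 1])    # on original sample
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)     # optimism-corrected discrimination
print(round(apparent_auc, 3), round(corrected_auc, 3))
```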
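
Regarding point 3, the reason to report the baseline survival alongside the coefficients is that an individual Cox prediction is reconstructed as S(t | x) = S0(t) raised to exp(linear predictor). The sketch below uses entirely hypothetical coefficients, covariate means, and a hypothetical baseline survival purely to show the calculation an external validator would perform.

```python
# Reconstructing an individual survival prediction from reported Cox model output:
# S(t | x) = S0(t) ** exp(linear predictor). All numbers are hypothetical.
import math

baseline_survival_24m = 0.30                   # reported S0(t) at 24 months
coefficients = {"age_years": 0.02, "midline_shift": 0.65}
covariate_means = {"age_years": 8.0, "midline_shift": 0.0}   # centering used at model development

patient = {"age_years": 12.0, "midline_shift": 1.0}
lp = sum(coefficients[k] * (patient[k] - covariate_means[k]) for k in coefficients)
survival_24m = baseline_survival_24m ** math.exp(lp)
print(round(survival_24m, 2))                  # predicted 24-month survival for this patient
```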

We encourage the journal and its readers to critically review studies on prediction models. Published work does not always provide sufficient detail about the model, although this is necessary to judge the quality of the prediction model. More details on the methodological concerns presented here, and on others including the selection of candidate prognostic variables and the evaluation of model performance measures, have recently been published and illustrated with clinical examples specifically for neurosurgeons [1]. Furthermore, 7 key steps for the development and evaluation of a neurosurgical clinical prediction model are tabulated there for a quick overview. Pediatric neurosurgeons should ideally be aware of this methodology, as it is highly consequential to our daily clinical practice.