Dear Editor:
The number of articles presenting a neurosurgical prediction model is rapidly increasing [1]. Although the number of publications reporting a clinical prediction model in Child’s Nervous System, Journal of Neurosurgery Pediatrics, and Pediatric Neurosurgery has remained relatively constant over the years (Fig. 1a), roughly two-thirds of these publications have appeared since 2015 (Fig. 1b). With so many prediction models now available, we should be able to draw firm conclusions about which model to use for our patients.
Clinical prediction models aim to predict an outcome of interest, for example, survival in high-grade glioma (HGG) patients or intraventricular hemorrhage in preterm infants, by combining two or more patient-related variables. The predictions obtained from these models can then be used for medical and shared decision-making, such as deciding whether to initiate surgical treatment, and for guidance in planning future lifestyle.
The development and evaluation of clinical prediction models involve multiple methodological steps. It is well known that these steps are often inadequately addressed and/or inadequately reported in publications, which clearly limits the usefulness of the presented prediction model. Utilizing invalid prediction models may jeopardize adequate decision-making in our daily clinical practice. Therefore, we want to point out a few crucial aspects of prediction models.
1. Sample size—In clinical prediction model studies, the number of events in the study population defines the effective sample size. As a rule of thumb, a minimum of 10 events per prognostic variable considered has been generally accepted, although more advanced calculations have recently been proposed that may yield a different ratio. The number of events is taken as the smaller of the number of patients with and without the event. Thus, if 50 of 110 HGG patients died during the study period, considering up to 5 prognostic variables seems reasonable for the development of a prediction model. Considering too many prognostic variables increases the risk of overfitting [2]. Overfitted models show promising results when evaluated on the patients on which the model was developed but disappointing results when applied to other sets of patients. This point is essential because many pediatric neurosurgical studies have relatively low effective sample sizes.
2. Validation—It is vital to gauge the validity of the predictions provided by a prediction model. As a result of overfitting, prediction models tend to show overly optimistic predictive performance in terms of discrimination and calibration [2]. Internal validation aims to quantify this optimism; its techniques reuse the same set of patients on which the model was developed. This step is a minimum requirement for publication of a prediction model. External validation assesses the performance of the prediction model on a different set of patients, for example, collected at other geographical locations and/or in other time periods. External validation is imperative before clinical uptake of the proposed model can take place.
3. Reporting—Guidelines for reporting prediction models exist. The TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) guideline, together with its explanatory paper, provides crucial information for the design, conduct, and evaluation of a prediction model [3, 4]. Adhering to the items of the TRIPOD checklist is highly recommended for proper and transparent reporting. Publications often provide graphical presentations of the underlying statistical model while the statistical details are lacking. For example, when Cox regression is used for predicting survival, the regression coefficients and the baseline survival at a given time point should be reported to enable external validation.
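The sample-size rule of thumb in point 1 can be sketched in a few lines of code. The helper name and the 10-events-per-variable default are our own framing of the heuristic described above; the numbers mirror the hypothetical HGG example.

```python
# Effective-sample-size check (rule of thumb from point 1).
def max_candidate_variables(n_patients, n_events, events_per_variable=10):
    """Smaller of events/non-events, divided by the events-per-variable rule."""
    effective_sample_size = min(n_events, n_patients - n_events)
    return effective_sample_size // events_per_variable

# 110 HGG patients of whom 50 died: effective sample size 50 -> up to 5 variables.
print(max_candidate_variables(110, 50))  # -> 5
```

Note that with 100 events among 110 patients, the effective sample size is the 10 non-events, so only a single prognostic variable could reasonably be considered.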
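The optimism quantified by internal validation (point 2) can be illustrated with a bootstrap sketch in the style of Harrell's optimism correction. The data, the crude difference-in-means scoring model, and all numbers below are invented purely for illustration; a real study would use an established modeling package.

```python
# Minimal sketch of bootstrap internal validation (optimism correction).
import random

random.seed(1)

def c_statistic(scores, labels):
    """Discrimination: probability that a random event outranks a random non-event."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fit(X, y):
    """Crude linear score: per-variable difference in means between outcome groups."""
    n_events = max(sum(y), 1)
    n_nonevents = max(len(y) - sum(y), 1)
    return [
        sum(c[i] for i in range(len(y)) if y[i] == 1) / n_events
        - sum(c[i] for i in range(len(y)) if y[i] == 0) / n_nonevents
        for c in zip(*X)
    ]

def predict(weights, X):
    return [sum(w * x for w, x in zip(weights, row)) for row in X]

# Simulated development set: 110 patients, 3 prognostic variables.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(110)]
y = [1 if row[0] + random.gauss(0, 1) > 0 else 0 for row in X]

weights = fit(X, y)
apparent = c_statistic(predict(weights, X), y)

# Optimism = performance on the bootstrap resample minus performance of the
# resample-fitted model on the original data, averaged over resamples.
optimisms = []
for _ in range(200):
    idx = [random.randrange(len(y)) for _ in range(len(y))]
    Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
    wb = fit(Xb, yb)
    optimisms.append(c_statistic(predict(wb, Xb), yb) - c_statistic(predict(wb, X), y))

corrected = apparent - sum(optimisms) / len(optimisms)
print(f"apparent c-statistic {apparent:.3f}, optimism-corrected {corrected:.3f}")
```

The corrected value is the apparent discrimination minus the average optimism, giving a more honest estimate of how the model would perform on new patients.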
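Point 3's reporting requirement for Cox models can be made concrete: given the published regression coefficients (log hazard ratios) and the baseline survival at a chosen time point, anyone can reproduce individual predictions via S(t | x) = S0(t)^exp(linear predictor). The variable names and all numbers below are hypothetical, chosen only to illustrate the calculation.

```python
import math

# Hypothetical published Cox model (invented for illustration).
coefficients = {"age_years": 0.03, "subtotal_resection": 0.80}  # log hazard ratios
baseline_survival_12m = 0.70  # S0(12 months) at the reference covariate values

def predicted_survival(patient, coefs, s0):
    """Cox model prediction: S(t | x) = S0(t) ** exp(linear predictor)."""
    linear_predictor = sum(coefs[name] * value for name, value in patient.items())
    return s0 ** math.exp(linear_predictor)

patient = {"age_years": 10, "subtotal_resection": 0}
print(f"12-month survival probability: "
      f"{predicted_survival(patient, coefficients, baseline_survival_12m):.2f}")
```

Without the baseline survival term, readers cannot turn the reported hazard ratios into absolute survival probabilities, which is exactly what external validation requires.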
We encourage the journal and its readers to critically review studies on prediction models. Published work does not always provide sufficient details of the model, although these are necessary to judge the quality of the prediction model. More details on the methodological concerns presented here and on others, including the selection of candidate prognostic variables and the evaluation of model performance measures, have recently been published and illustrated with clinical examples specifically for neurosurgeons [1]. Furthermore, seven key steps for the development and evaluation of a neurosurgical clinical prediction model are tabulated there for a quick overview. Pediatric neurosurgeons should be aware of this methodology, as it is highly consequential to our daily clinical practice.
References
Mijderwijk HJ, Steyerberg EW, Steiger HJ, Fischer I, Kamp MA (2019) Fundamentals of clinical prediction modeling for the neurosurgeon. Neurosurgery 85:302–311. https://doi.org/10.1093/neuros/nyz282
Steyerberg EW (2019) Clinical prediction models. A practical approach to development, validation, and updating. Springer, New York. https://doi.org/10.1007/978-3-030-16399-0
Collins GS, Reitsma JB, Altman DG, Moons KGM (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMC Med 13:1–10. https://doi.org/10.1186/s12916-014-0241-z
Moons KGM, Altman DG, Reitsma JB, Ioannidis JPA, Macaskill P, Steyerberg EW, Vickers AJ, Ransohoff DF, Collins GS (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med 162:W1–W73. https://doi.org/10.7326/M14-0698
Acknowledgments
We thank Maarten F.M. Engel, an information specialist at Erasmus MC, for performing the literature search.
Funding
Open Access funding provided by Projekt DEAL.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Mijderwijk, HJ., Beez, T., Hänggi, D. et al. Clinical prediction models. Childs Nerv Syst 36, 895–897 (2020). https://doi.org/10.1007/s00381-020-04577-8