Risk Prediction Models in Perioperative Medicine: Methodological Considerations
Purpose of Review
Risk prediction models hold enormous potential for assessing surgical risk in a standardized, objective manner. Despite the vast number of risk prediction models developed, they have not lived up to their potential. The aim of this paper is to provide an overview of the methodological issues that should be considered when developing and validating a risk prediction model to ensure a useful, accurate model.
Systematic reviews examining the methodological and reporting quality of these models have found widespread deficiencies that limit their usefulness.
Risk prediction modelling is a growing field that is attracting considerable interest in the era of personalized medicine. Although there are no shortcuts, and many challenges arise when developing and validating accurate, useful prediction models, these challenges are surmountable if the abundant methodological and practical guidance available is used correctly and efficiently.
Keywords: Risk prediction · Discrimination · Calibration · Multivariable · Statistical methods
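As a minimal, hedged illustration of the two validation concepts named above: discrimination is commonly summarized by the c-statistic (the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event), and calibration-in-the-large compares mean predicted risk with the observed event rate. The sketch below uses hypothetical toy data and plain Python; it is illustrative only, not a substitute for the full validation frameworks discussed in the references.

```python
def c_statistic(y_true, y_prob):
    """Discrimination: probability that a randomly chosen event gets a
    higher predicted risk than a randomly chosen non-event (ties = 0.5)."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    nonevents = [p for y, p in zip(y_true, y_prob) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs


def calibration_in_the_large(y_true, y_prob):
    """Calibration-in-the-large: observed event rate minus mean predicted
    risk; a value near 0 means predictions are right on average."""
    return sum(y_true) / len(y_true) - sum(y_prob) / len(y_prob)


# Hypothetical toy data: observed outcomes and predicted probabilities.
y = [0, 0, 1, 1, 0, 1]
p = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(round(c_statistic(y, p), 3))              # → 0.889
print(round(calibration_in_the_large(y, p), 3)) # → 0.075
```

Note that good discrimination does not imply good calibration (or vice versa), which is why validation studies are expected to report both.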
Jennifer De Beyer has received research funding through a grant from Cancer Research UK.
Compliance with Ethical Guidelines
Conflict of Interest
The authors declare that they have no conflict of interest.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Papers of particular interest, published recently, have been highlighted as: • Of importance, •• Of major importance.
- 12. Kleinrouweler CE, Cheong-See FM, Collins GS, et al. Prognostic models in obstetrics: available, but far from applicable. Am J Obstet Gynecol. 2016;214(1):79-90.e36.
- 16.• Collins GS, de Groot JA, Dutton S, et al. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting. BMC Med Res Methodol. 2014;14:40. Provides an overview of the conduct and reporting of external validation studies.
- 27.• Moons KG, de Groot JA, Bouwmeester W, et al. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med. 2014;11(10):e1001744. Provides a framework and guidance for conducting systematic reviews of prediction model studies.
- 29.• Peat G, Riley RD, Croft P, et al. Improving the transparency of prognosis research: the role of reporting, data sharing, registration, and protocols. PLoS Med. 2014;11:e1001671. Stresses the importance of planning prediction model studies and, where possible, registering the study and publishing the study protocol.
- 36.Hickey GL, Blackstone EH. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery. J Thorac Cardiovasc Surg. 2016. doi: 10.1016/j.jtcvs.2016.04.023.
- 37.Kattan MW, Hess KR, Amin MB, et al. American Joint Committee on Cancer acceptance criteria for inclusion of risk models for individualized prognosis in the practice of precision medicine. CA Cancer J Clin. 2016. doi: 10.3322/caac.21339.
- 38. Wynants L, Collins GS, van Calster B. Key steps and common pitfalls in developing and validating risk models: a review. BJOG Int J Obstet Gynaecol. 2016 (in press).
- 39.•• Moons KGM, Altman DG, Reitsma JB, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162(1):W1-W73. Provides methodological guidance on what to report when publishing a prediction model study.
- 41.• Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. New York: Springer; 2009. Key textbook covering the major aspects of prediction modelling.
- 42.Groenwold RH, Moons KG, Pajouheshnia R, et al. Explicit inclusion of treatment in prognostic modelling was recommended in observational and randomised settings. J Clin Epidemiol. 2016. doi: 10.1016/j.jclinepi.2016.03.017.
- 49. Ogundimu EO, Altman DG, Collins GS. Simulation study finds adequate sample size for developing prediction models is not simply related to events per variable. J Clin Epidemiol. 2016 (in press).
- 57. Little RJA, Rubin DB. Statistical analysis with missing data. 2nd ed. Hoboken, NJ: Wiley; 2002.
- 67.• Collins GS, Ogundimu EO, Cook JA, Le Manach Y, Altman DG. Quantifying the impact of different approaches for handling continuous predictors on the performance of a prognostic model. Stat Med. 2016. Illustrates the loss of predictive accuracy when continuous measurements are categorised.
- 79. van Buuren S, Oudshoorn CGM. Multivariate imputation by chained equations: MICE V1.0 user's manual. Report PG/VGZ/00.038. Leiden: TNO Preventie en Gezondheid; 2000.
- 83.• Steyerberg EW, Vickers AJ, Cook NR, et al. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21(1):128-138. Describes many of the key performance measures to calculate when validating a prediction model.
- 84.•• Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med. 2015;162(1):55-63. Key paper on what to report when publishing a study developing or validating a prediction model.