Background
When we develop or validate a prediction model, we want to quantify how good the predictions from the model are (“model performance”). Predictions are absolute risks, which go beyond assessments of relative risks, such as regression coefficients, odds ratios, or hazard ratios. We can distinguish apparent, internally validated, and externally validated model performance (Chap. 5). For all types of validation, we need performance criteria in line with the research questions, and different perspectives can be chosen. We first take the perspective that we want to quantify how close our predictions are to the actual outcome. Next, more specific questions can be asked about calibration and discrimination properties of the model, which are especially relevant for prediction of binary outcomes in individual patients. We will illustrate the use of performance measures in the testicular cancer case study, with model development in 544 patients, internal validation with bootstrapping, and external validation with 273 patients from another centre.
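The perspective of quantifying how close predictions are to the actual outcome, and the discrimination properties of a model, can be made concrete with two commonly used measures for binary outcomes: the Brier score (mean squared distance between predicted risk and observed outcome) and the concordance (c) statistic. The sketch below is illustrative only and not taken from the chapter; the outcome and risk values are hypothetical.

```python
# Illustrative sketch (hypothetical data, not from the chapter):
# two common performance measures for binary outcome predictions.

def brier_score(y, p):
    """Mean squared difference between predicted risks p and
    observed binary outcomes y (0/1); lower is better."""
    return sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / len(y)

def c_statistic(y, p):
    """Probability that a randomly chosen patient with the outcome
    receives a higher predicted risk than one without (ties count 0.5)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = sum(
        1.0 if pe > pn else 0.5 if pe == pn else 0.0
        for pe in events for pn in nonevents
    )
    return concordant / (len(events) * len(nonevents))

# Hypothetical observed outcomes and predicted absolute risks
y = [0, 0, 1, 1, 1]
p = [0.1, 0.4, 0.35, 0.8, 0.7]
print(brier_score(y, p))
print(c_statistic(y, p))
```

The Brier score captures overall closeness of predictions to outcomes, while the c-statistic isolates discrimination, the model's ability to rank patients with the outcome above those without; calibration would be assessed separately, for example by comparing predicted and observed risks across groups.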
© 2009 Springer Science+Business Media, LLC
Steyerberg, E. (2009). Evaluation of performance. In: Clinical Prediction Models. Statistics for Biology and Health. Springer, New York, NY. https://doi.org/10.1007/978-0-387-77244-8_15
Print ISBN: 978-0-387-77243-1
Online ISBN: 978-0-387-77244-8