Abstract
At the beginning of the previous century, it was traditional among engineers and physicists to perform an error analysis of both measurements and computations (Dubbel 1939). The enormous computational capabilities that have become available since then enabled the development of complex computer models. These models use numerous parameters, various sub-models, and large amounts of input data subject to uncertainty that is, however, frequently ignored. One reason in the past was the difficulty, if not impossibility, of performing an uncertainty analysis of the results of computationally demanding models. Combining powerful statistical methods, as described in the following chapters, with today’s computational capabilities opens the door to uncertainty analysis as a standard procedure.
This chapter leads through the developmental stages of a computer model and points out their sources of epistemic uncertainty. The difference between “epistemic uncertainty” and “aleatoric uncertainty” is explained; it necessitates two different interpretations of “probability” for their quantification. Epistemic uncertainty is quantified using subjective probability. The need to separate epistemic from aleatoric uncertainty arises from the type of assessment question the model result is to answer. This is explained and illustrated with practical examples.
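The separation of the two uncertainty types described above is commonly propagated with a two-loop (nested) Monte Carlo scheme: an outer loop samples the epistemically uncertain parameters from subjective probability distributions, and an inner loop samples the aleatoric variability. A minimal sketch, assuming a purely illustrative toy model y = a·x with made-up distributions (the model, the parameter `a`, and all distributional choices are assumptions, not from the chapter):

```python
# Two-loop (nested) Monte Carlo sketch separating epistemic from
# aleatoric uncertainty for a toy model y = a * x, where parameter
# `a` is epistemically uncertain (lack of knowledge, subjective
# probability) and input `x` varies aleatorically across a population.
# All distributions are illustrative assumptions.
import random
import statistics

random.seed(1)

N_EPISTEMIC = 200   # outer loop: subjective-probability samples of `a`
N_ALEATORIC = 500   # inner loop: variability samples of `x`

p95_values = []
for _ in range(N_EPISTEMIC):
    a = random.uniform(0.8, 1.2)  # subjective distribution for `a`
    # Aleatoric distribution of the model output for this value of `a`:
    ys = sorted(a * random.gauss(10.0, 2.0) for _ in range(N_ALEATORIC))
    # Record the population 95th percentile obtained with this `a`:
    p95_values.append(ys[int(0.95 * N_ALEATORIC)])

# The spread of p95_values expresses epistemic uncertainty about the
# aleatoric 95th percentile -- the kind of statement a combined,
# single-loop analysis could not provide.
lo, hi = min(p95_values), max(p95_values)
print(f"95th percentile lies between {lo:.2f} and {hi:.2f} "
      f"(median estimate {statistics.median(p95_values):.2f})")
```

The design point is that the two loops must not be merged: a single pooled sample would answer only “what is the probability of a given output value?”, whereas the nested scheme answers “how well do we know the population percentile?”, which is the distinction the assessment question dictates.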
References
Aven, T. (2010). Some reflections on uncertainty analysis and management. Reliability Engineering & System Safety, 95, 195–201.
Dubbel, H. (Ed.). (1939). Taschenbuch für den Maschinenbau. Berlin: Springer.
Glorennec, P. (2006). Analysis and reduction of the uncertainty of the assessment of children’s lead exposure around an old mine. Environmental Research, 100, 150–158.
Hofer, E. (1996). When to separate uncertainties and when not to separate. Reliability Engineering & System Safety, 54, 113–118.
Hoffman, F. O., & Hammonds, J. S. (1994). Propagation of uncertainty in risk assessments: The need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Analysis, 14(5), 707–712.
Joint Committee for Guides in Metrology. (2008). Evaluation of measurement data – Guide to the expression of uncertainty in measurement (1st ed.). JCGM 100 (GUM 1995 with minor corrections).
Koch, F. H., et al. (2009). Evaluating critical uncertainty thresholds in a spatial model of forest pest invasion risk. Risk Analysis, 29(9), 1227–1241.
Ragas, A. M. J., et al. (2008). Separation of uncertainty and inter-individual variability in human exposure modeling. Journal of Exposure Science and Environmental Epidemiology, 19, 201–212.
Sanchez, A., et al. (2009). Addressing imperfect maintenance modeling uncertainty in unavailability and cost based optimization. Reliability Engineering & System Safety, 94, 22–32.
Warren-Hicks, W. J., & Hart, A. (2010). Application of uncertainty analysis to ecological risks of pesticides. Boca Raton, FL: CRC Press.
Weathers, J. B., et al. (2009). An exercise in model validation: Comparing univariate statistics and Monte Carlo based multivariate statistics. Reliability Engineering & System Safety, 94, 1695–1702.
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Hofer, E. (2018). Introduction and Necessary Distinctions. In: The Uncertainty Analysis of Model Results. Springer, Cham. https://doi.org/10.1007/978-3-319-76297-5_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-76296-8
Online ISBN: 978-3-319-76297-5
eBook Packages: Mathematics and Statistics (R0)