Processes that are not fully understood, and whose outcomes cannot be precisely predicted, are often called uncertain. Most of the inputs to, processes within, and outputs from water resource systems are not known with certainty. Hence the future impacts of such systems, and even people’s reactions and responses to those impacts, are also uncertain. Ignoring this uncertainty when performing analyses in support of decisions involving the development and management of water resource systems can lead to incorrect conclusions, or at least to more surprises than would result from a more thorough analysis that takes these uncertainties into account. This chapter introduces some commonly used approaches for dealing with model input and output uncertainty. Subsequent chapters incorporate these tools in more detailed optimization, simulation, and statistical models designed to identify and evaluate alternative plans and policies for water resource system development and operation.

6.1 Introduction

Uncertainty is always present when planning and operating water resource systems. It arises because many factors that affect the performance of water resource systems are not and cannot be known with certainty when a system is planned, designed, built, and operated. The success and performance of each component of a system often depends on future meteorological, demographic, social, technical, and political conditions, all of which may influence future benefits, costs, environmental impacts, and social acceptability. Uncertainty also arises due to the stochastic (random over time) nature of meteorological and hydrological processes such as rainfall and evaporation . Similarly, future populations of towns and cities, per capita water usage rates, irrigation patterns, and priorities for water uses, all of which impact water demand, are not known with certainty. This chapter introduces methods for describing and dealing with uncertainty, and provides some simple examples of their use in water resources planning. These methods are extended in the following two chapters.

There are many ways to deal with uncertainty. The simplest approach is to replace each uncertain quantity either by its expected or average value or by some critical (e.g., “worst-case”) value and then proceed with a deterministic approach. Use of expected values or alternatively median values of uncertain quantities can be adequate if the uncertainty or variation in a quantity is reasonably small and does not critically affect the performance of the system. If expected values of uncertain parameters or variables are used in a deterministic model , the planner can then assess the importance of uncertainty with sensitivity and uncertainty analyses , discussed later in this and subsequent chapters.

Replacement of uncertain quantities by either expected or worst-case values can adversely affect the evaluation of project performance when important parameters are highly variable. To illustrate these issues, consider the evaluation of the recreation potential of a reservoir. Table 6.1 shows that the elevation of the water surface varies from year to year depending on the inflow and demand for water. The table indicates the pool levels and their associated probabilities as well as the expected use of the recreation facility with different pool levels.

Table 6.1 Data for determining reservoir recreation potential

The average pool level \( \overline{L} \) is simply the sum of each possible pool level times its probability, or

$$ \begin{aligned}\overline{L} & = 10(0.10) + 20(0.25) + 30(0.30)\\& \quad + 40(0.25) + 50(0.10) = 30\end{aligned} $$
(6.1)

This pool level corresponds to 100 visitor-days per day

$$ {\text{VD}}(\overline{L}) = 100\,{\text{visitor-}} {\text{days}}\,{\text{per}}\,{\text{day}} $$
(6.2)

A worst-case analysis might select a pool level of 10 as a critical value , yielding an estimate of system performance equal to 25 visitor-days per day

$$ {\text{VD}}(L_{\text{low}} ) = {\text{VD}}(10) = 25\,{\text{visitor-}} {\text{days}}\,{\text{per}}\,{\text{day}} $$
(6.3)

Neither of these values is a good approximation of the average visitation rate, which is

$$ \begin{aligned} \overline{\text{V}}\overline{\text{D}} & = 0.10{\text{VD}}(10) + 0.25{\text{VD}}(20) + 0.30{\text{VD}}(30)\\& \quad + 0.25{\text{VD}}(40) + 0.10{\text{VD}}(50) \\ & = 0.10(25) + 0.25(75) + 0.30(100) \\& \quad + 0.25(80) + 0.10(70) \\ & = 78.25\,{\text{visitor-}} {\text{days}}\,{\text{per}}\,{\text{day}} \\ \end{aligned} $$
(6.4)

Clearly, the average visitation rate \( \overline{\text{VD}} = 78.25 \), the visitation rate corresponding to the average pool level \( {\text{VD}}(\overline{L}) = 100 \), and the worst-case assessment \( {\text{VD}}(L_{\text{low}}) = 25 \) are all very different.

The median and the most likely value (the mode) are other measures that characterize a data set. They have the advantage of being less influenced by extreme outliers. For the symmetric data set shown in Table 6.1, the median, the mode, and the mean are the same, namely 30. But if instead the probabilities of the respective pool levels were 0.30, 0.25, 0.20, 0.15, and 0.10 (rather than 0.10, 0.25, 0.30, 0.25, 0.10), the expected value or mean would be 25, the value having the highest probability of occurring (the most likely value, or mode) would be 10, and the median, the value that is greater than or equal to half of the values in the data set and less than or equal to the other half, would be 20.

Thus using only average values in a complex model can produce a poor representation of both the average performance and the possible performance range. When important quantities are uncertain, one should evaluate both the expected performance of a project and the risk and possible magnitude of project failures and their consequences.
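
To make the point concrete, the short Python sketch below reproduces the calculations of Eqs. 6.1–6.4 using only the pool levels, probabilities, and visitation rates given in Table 6.1 and the equations above; it shows that the expected performance E[VD(L)] generally differs from the performance at the expected condition VD(E[L]).

```python
# Expected value of a function vs. the function of the expected value,
# using the reservoir recreation example (Table 6.1, Eqs. 6.1-6.4).

levels = [10, 20, 30, 40, 50]             # possible pool levels
probs  = [0.10, 0.25, 0.30, 0.25, 0.10]   # their probabilities
vd     = {10: 25, 20: 75, 30: 100, 40: 80, 50: 70}   # visitor-days per day at each level

mean_level  = sum(l * p for l, p in zip(levels, probs))        # Eq. 6.1 -> 30
vd_at_mean  = vd[round(mean_level)]                            # Eq. 6.2 -> 100 (round guards against float error)
worst_case  = vd[min(levels)]                                  # Eq. 6.3 -> 25
expected_vd = sum(vd[l] * p for l, p in zip(levels, probs))    # Eq. 6.4 -> 78.25

print(mean_level, vd_at_mean, worst_case, expected_vd)
```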

This chapter reviews many of the methods of probability and statistics that are useful in water resources planning and management. Section 6.2 reviews the important concepts and methods of probability and statistics that are needed in this and other chapters of this book. Section 6.3 introduces several probability distributions that are often used to model or describe uncertain quantities. The section also discusses methods for fitting these distributions using historical information, and methods of assessing whether the distributions are adequate representations of the data. Sections 6.4, 6.5, and 6.6 expand upon the use of these distributions, and discuss alternative parameter estimation methods.

Section 6.7 presents the basic ideas and concepts of stochastic processes or time series. These are used to model streamflows, rainfall, temperature, or other phenomena whose values change with time. The section contains a description of Markov chains, a special type of stochastic process used in many stochastic optimization and simulation models. Section 6.8 illustrates how synthetic flows and other time series inputs can be generated for stochastic simulations. The latter are introduced with an example in Sect. 6.9.

Many topics receive only brief treatment here. Readers desiring additional information should consult applied statistical texts such as Benjamin and Cornell (1970), Haan (1977), Kite (1988), Stedinger et al. (1993), Kottegoda and Rosso (1997), Ayyub and McCuen (2002), and Pishro-Nik (2014).

6.2 Probability Concepts and Methods

This section introduces basic concepts of probability and statistics. These are used throughout this chapter and in later chapters in the book.

6.2.1 Random Variables and Distributions

A basic concept in probability theory is that of the random variable. A random variable is a function whose value cannot be predicted with certainty. Examples of random variables are (1) the number of years until the flood stage of a river washes away a small bridge, (2) the number of times during a reservoir’s life that the level of the pool will drop below a specified level, (3) the rainfall depth next month, and (4) next year’s maximum flow at a gage site on an unregulated stream. The values of all of these quantities depend on events that are not knowable before the event has occurred. Probability can be used to describe the likelihood these random variables will equal specific values or be within a given range of specific values.

The first two examples illustrate discrete random variables, random variables that take on values in a discrete set (such as the positive integers). The last two examples illustrate continuous random variables. Continuous random variables take on values in a continuous set. A property of all continuous random variables is that the probability that they equal any specific number is zero. For example, the probability that the total rainfall depth in a month will be exactly 5.0 cm is zero, while the probability that the total rainfall will lie between 4 and 6 cm can be nonzero. Some random variables are combinations of continuous and discrete random variables.

Let X denote a random variable and x a possible value of that random variable X. Random variables are generally denoted by capital letters and particular values they take on by lowercase letters. For any real-valued random variable X, its cumulative distribution function F X (x), often denoted as just the cdf, equals the probability that the value of X is less than or equal to a specific value or threshold x

$$ F_{X} (x) = \Pr[X \le x] $$
(6.5)

This cumulative distribution function F X (x) is a non-decreasing function of x because

$$ \Pr[X \le x] \le \Pr [X \le x + {\delta }]\quad {\text{for}}\;{\delta } > 0 $$
(6.6)

In addition,

$$ \mathop {\lim }\limits_{{x \to {+ \infty} }} F_{X} (x) = 1 $$
(6.7)

and

$$ \mathop {\lim }\limits_{{x \to {- \infty} }} F_{X} (x) = 0 $$
(6.8)

The first limit equals 1 because the probability that X takes on some value less than infinity must be unity; the second limit is zero because the probability that X takes on a value less than minus infinity must be zero.

If X is a real-valued discrete random variable that takes on specific values x 1, x 2, …, the probability mass function p X (x i ) is the probability X takes on the value x i . Thus one would write

$$ p_{X} (x_{i} ) = \Pr [X = x_{i} ] $$
(6.9)

The value of the cumulative distribution function F X (x) for a discrete random variable is the sum of the probabilities of all x i that are less than or equal to x.

$$ F_{X} (x) = \sum\limits_{{x_{i} \le x}} {p_{X} (x_{i} )} $$
(6.10)

Figure 6.1 illustrates the probability mass function p X (x i ) and the cumulative distribution function of a discrete random variable.

Fig. 6.1 Cumulative distribution and probability density or mass functions of random variables: a continuous distributions; b discrete distributions

The probability density function f X (x) for a continuous random variable X is the analogue of the probability mass function of a discrete random variable. The probability density function, often called the pdf, is the derivative of the cumulative distribution function so that

$$ f_{X} (x) = \frac{{{\text{d}}F_{X} (x)}}{{{\text{d}}x}} \ge 0 $$
(6.11)

The area under a probability density function always equals 1.

$$ \int\limits_{ - \infty }^{ + \infty } {f_{X} (x)\,{\text{d}}x} = 1 $$
(6.12)

If a and b are any two constants, the cumulative distribution function or the density function may be used to determine the probability that X is greater than a and less than or equal to b where

$$ \Pr \left[ {a < X \le b} \right] = F_{X} (b) - F_{X} (a) = \int\limits_{a}^{b} {f_{X} (x){\text{d}}x} $$
(6.13)

The probability density function specifies the relative frequency with which the value of a continuous random variable falls in different intervals.
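
As a quick numerical illustration of Eq. 6.13, the sketch below evaluates Pr[a < X ≤ b] for a hypothetical continuous distribution in two ways, as the difference of cdf values and by integrating the pdf; the gamma distribution and its parameters are arbitrary choices for illustration, not values taken from the text.

```python
# Verify Eq. 6.13 numerically: Pr[a < X <= b] = F_X(b) - F_X(a) = integral of f_X from a to b.
from scipy import stats
from scipy.integrate import quad

dist = stats.gamma(2.0, scale=3.0)     # hypothetical continuous distribution

a, b = 4.0, 6.0
p_from_cdf = dist.cdf(b) - dist.cdf(a)     # F_X(b) - F_X(a)
p_from_pdf, _ = quad(dist.pdf, a, b)       # numerical integration of the density

print(p_from_cdf, p_from_pdf)   # the two values agree to numerical precision
```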

Life is seldom so simple that only a single quantity is uncertain. Thus, the joint probability distribution of two or more random variables can also be defined. If X and Y are two continuous real-valued random variables, their joint cumulative distribution function is

$$ F_{XY} (x,y) = \Pr [X \le x\;{\text{and}}\;Y \le y] = \int\limits_{ - \infty }^{x} {\int\limits_{ - \infty }^{y} {f_{XY} (u,v){\text{d}}u{\text{d}}v} } $$
(6.14)

If two random variables are discrete, then

$$ F_{XY} (x, y) = \sum\limits_{{x_{i} \le x}} {\sum\limits_{{y_{j} \le y}} {p_{XY} (x_{i} ,y_{j} )} } $$
(6.15)

where the joint probability mass function is

$$ p_{XY} (x_{i} ,y_{j} ) = \Pr \left[ {X = x_{i} \;{\text{and}}\;Y = y_{j} } \right] $$
(6.16)

If X and Y are two random variables , and the distribution of X is not influenced by the value taken by Y, and vice versa, the two random variables are said to be independent. Independence is an important and useful idea when attempting to develop a model of two or more random variables . For independent random variables

$$ \Pr [a \le X \le b\;{\text{and}}\;c \le Y \le d] = \Pr [a \le X \le b]\Pr [c \le Y \le d] $$
(6.17)

for any a, b, c, and d. As a result,

$$ F_{XY} (x,y) = F_{X} (x)F_{Y} (y) $$
(6.18)

which implies for continuous random variables that

$$ f_{XY} (x,y) = f_{X} (x)f_{Y} (y) $$
(6.19)

and for discrete random variables that

$$ p_{XY} ( x,y ) = p_{X} ( x )p_{Y} ( y ) $$
(6.20)

Other useful concepts are those of the marginal and conditional distributions . If X and Y are two random variables whose joint cumulative distribution function F XY (x, y) has been specified, then F X (x), the marginal cumulative distribution of X, is just the cumulative distribution of X ignoring Y. The marginal cumulative distribution function of X equals

$$ F_{X} ( x ) = \Pr [X \le x] = \mathop {\lim }\limits_{y \to \infty } F_{XY} (x,y) $$
(6.21)

where the limit is equivalent to letting Y take on any value. If X and Y are continuous random variables, the marginal density of X can be computed from

$$ f_{X} (x) = \int\limits_{ - \infty }^{ + \infty } {f_{XY} (x,y) {\text{d}}y} $$
(6.22)

The conditional cumulative distribution function is the cumulative distribution function for X given that Y has taken a particular value y. Thus the value of Y may have been observed and one is interested in the resulting conditional distribution , for the so far unobserved value of X. The conditional cumulative distribution function for continuous random variables is given by

$$ F_{X|Y} (x|y) = \Pr [X \le x|Y = y] = \frac{{\int\nolimits_{ - \infty }^{x} {f_{XY} (s, y){\text{d}}s} }}{{f_{Y} (y)}} $$
(6.23)

It follows that the conditional density function is

$$ f_{X|Y} (x|y) = \frac{{f_{XY} (x, y)}}{{f_{Y} (y)}} $$
(6.24)

For discrete random variables , the probability of observing X = x, given that Y = y equals

$$ p_{X|Y} (x|y) = \frac{{p_{XY} (x, y)}}{{p_{Y} (y)}} $$
(6.25)

These results can be extended to more than two random variables. See Kottegoda and Rosso (1997) for a more advanced discussion.
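
For discrete random variables these marginal and conditional relationships reduce to row and column operations on a joint probability table. The sketch below uses a small, hypothetical joint pmf (not data from the text) to compute the marginal pmfs, the conditional pmf of X given Y, and a check of independence (Eq. 6.20).

```python
# Marginal and conditional pmfs from a joint pmf (Eqs. 6.20-6.25), discrete case.
import numpy as np

# Hypothetical joint pmf p_xy[i, j] = Pr[X = x_i and Y = y_j]; rows index x, columns index y.
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.40]])

p_x = p_xy.sum(axis=1)     # marginal pmf of X (sum over y)
p_y = p_xy.sum(axis=0)     # marginal pmf of Y (sum over x)

# Conditional pmf of X given Y = y_j (Eq. 6.25): each column divided by Pr[Y = y_j].
p_x_given_y = p_xy / p_y

# X and Y are independent iff p_XY(x_i, y_j) = p_X(x_i) p_Y(y_j) for all i, j (Eq. 6.20).
independent = np.allclose(p_xy, np.outer(p_x, p_y))

print(p_x, p_y, p_x_given_y, independent, sep="\n")
```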

6.2.2 Expected Values

Knowledge of the probability density function of a continuous random variable, or of the probability mass function of a discrete random variable , allows one to calculate the expected value of any function of the random variable . Such an expectation may represent the average rainfall depth , average temperature , average demand shortfall, or expected economic benefits from system operation. If g is a real-valued function of a continuous random variable X, the expected value of g(X) is

$$ E[g(X)] = \int\limits_{ - \infty }^{ + \infty } {g(x)f_{X} (x) {\text{d}}x} $$
(6.26)

whereas for a discrete random variable

$$ E[g(X)] = \sum\limits_{{x_{i} }} {g(x_{i} )p_{X} (x_{i} )} $$
(6.27)

E[ ] is called the expectation operator. It has several important properties. In particular, the expectation of a linear function of X is a linear function of the expectation of X. Thus if a and b are two nonrandom constants,

$$ E[a + bX] = a + bE[X] $$
(6.28)

The expectation of a function of two random variables is given by

$$ E[g(X,Y)] = \int\limits_{ - \infty }^{ + \infty } {\int\limits_{ - \infty }^{ + \infty } {g(x,y)f_{XY} (x,y){\text{d}}x\,{\text{d}}y} } $$

or

$$ E[g(X, Y)] = \sum\limits_{i} {\sum\limits_{j} {g(x_{i}, y_{j} )p_{XY} (x_{i}, y_{j} )} } $$
(6.29)

If X and Y are independent, the expectation of the product of a function h(·) of X and a function g(·) of Y is the product of the expectations

$$ E [ {g(X)h(Y)} ] = E[ {g(X)} ]E[ {h(Y)} ] $$
(6.30)

This follows from substitution of Eqs. 6.19 and 6.20 into Eq. 6.29.

6.2.3 Quantiles, Moments, and Their Estimators

While the cumulative distribution function provides a complete specification of the properties of a random variable , it is useful to use simpler and more easily understood measures of the central tendency and range of values that a random variable may assume. Perhaps the simplest approach to describing the distribution of a random variable is to report the value of several quantiles. The pth quantile of a random variable X is the smallest value x p such that X has a probability p of assuming a value equal to or less than x p

$$ \Pr [X < x_{p} ] \le p \le {\Pr} [X \le x_{p} ] $$
(6.31)

Equation 6.31 is written so that if the cumulative distribution function jumps from below p to above p at some point x p , that value x p is defined as the pth quantile even though F X (x p ) ≠ p. If X is a continuous random variable, then in the region where f X (x) > 0, the quantiles are uniquely defined and are obtained by solution of

$$ F_{X} \left( {x_{p} } \right) = p $$
(6.32)

Frequently reported quantiles are the median x 0.50 and the lower and upper quartiles x 0.25 and x 0.75. The median describes the location or central tendency of the distribution of X because the random variable is, in the continuous case, equally likely to be above as below that value. The interquartile range [x 0.25, x 0.75] provides an easily understood description of the range of values that the random variable might assume. The pth quantile is also the 100 p percentile.

In a given application, particularly when safety is of concern, it may be appropriate to use other quantiles. In floodplain management and the design of flood control structures, the 100-year flood x 0.99 is often the selected design value. In water quality management, a river’s minimum seven-day-average low flow expected once in 10 years is often used as the critical planning value: here the one-in-ten-year value is the 10th percentile of the distribution of the annual minima of the seven-day average flows.

The natural sample estimate of the median x 0.50 is the median of the sample. In a sample of size n where x (1) ≤ x (2) ≤ ··· ≤ x (n) are the observations ordered by magnitude, and for a nonnegative integer k such that n = 2k (even) or n = 2k + 1 (odd), the sample estimate of the median is

$$ \hat{x}_{0.50} = \left\{ {\begin{array}{*{20}l} {x_{(k + 1)} } \hfill & {{\text{for}}\;n = 2k + 1} \hfill \\ {\frac{1}{2}\left[ {x_{(k)} + x_{(k + 1)} } \right]} \hfill & {{\text{for}}\;n = 2k} \hfill \\ \end{array} } \right. $$
(6.33)

Sample estimates of other quantiles may be obtained using x (i) as an estimate of x q for q = i/(n + 1) and then interpolating between observations to obtain \( \hat{x}_{p} \) for the desired p. This only works for 1/(n + 1) ≤ p ≤ n/(n + 1) and can yield rather poor estimates of x p when (n + 1)p is near either 1 or n. An alternative approach is to fit a reasonable distribution function to the observations, as discussed in Sects. 6.3.1 and 6.3.2, and then estimate x p using Eq. 6.32, where F X (x) is the fitted distribution.
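
The quantile estimates described above are easy to compute. The sketch below, using an arbitrary illustrative sample, evaluates the sample median with Eq. 6.33 and interpolates between the ordered observations, assigning x (i) the cumulative probability i/(n + 1).

```python
# Sample quantile estimation using x_(i) as an estimate of x_q with q = i/(n+1) (Sect. 6.2.3).
import numpy as np

x = np.array([12., 7., 31., 18., 24., 9., 15., 27., 21., 11.])   # hypothetical sample
xs = np.sort(x)
n = len(xs)

# Sample median, Eq. 6.33 (here n is even, so average the two middle order statistics).
k = n // 2
median = xs[k] if n % 2 == 1 else 0.5 * (xs[k - 1] + xs[k])

# Weibull plotting positions q_i = i/(n+1) attached to the ordered observations.
q = np.arange(1, n + 1) / (n + 1)

# Interpolate to estimate other quantiles; valid only for 1/(n+1) <= p <= n/(n+1).
x_25 = np.interp(0.25, q, xs)
x_75 = np.interp(0.75, q, xs)

print(median, x_25, x_75)
```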

Another simple and common approach to describing a distribution’s center, spread, and shape is by reporting the moments of a distribution. The first moment about the origin μ X is the mean of X and is given by

$$ \mu_{X} = E[X] = \int\limits_{ - \infty }^{ + \infty } {x f_{X} (x) {\text{d}}x} $$
(6.34)

Moments other than the first are normally measured about the mean. The second moment measured about the mean is the variance , denoted Var(X) or \( \sigma_{X}^{2} \), where

$$ \sigma_{X}^{2} = {\text{Var}}(X) = E[(X - \mu_{X} )^{2} ] $$
(6.35)

The standard deviation σ X is the square root of the variance . While the mean μ X is a measure of the central value of X, the standard deviation σ X is a measure of the spread of the distribution of X about its mean μ X .

Another measure of the variability in X is the coefficient of variation ,

$$ {\text{CV}}_{X} = \frac{{\sigma_{X} }}{{\mu_{X} }} $$
(6.36)

The coefficient of variation expresses the standard deviation as a proportion of the mean. It is useful for comparing the relative variability of flows in rivers of different sizes, or of rainfall in different regions, since both quantities are strictly positive.

The third moment about the mean, denoted λ X , measures the asymmetry, or skewness, of the distribution

$$ {\lambda }_{X} = E[(X - \mu_{X} )^{3} ] $$
(6.37)

Typically, the dimensionless coefficient of skewness γ X is reported rather than the third moment λ X . The coefficient of skewness is the third moment rescaled by the cube of the standard deviation so as to be dimensionless and hence unaffected by the scale of the random variable

$$ \gamma_{X} = \frac{{\lambda_{X} }}{{\sigma_{X}^{3} }} $$
(6.38)

Streamflows and other natural phenomena that are necessarily nonnegative often have distributions with positive skew coefficients, reflecting the asymmetric shape of their distributions.

When the distribution of a random variable is not known, but a set of observations {x 1, …, x n } is available, the moments of the unknown distribution of X can be estimated based on the sample values using the following equations.

The sample estimate of the mean

$$ \overline{X} = \sum\limits_{i = 1}^{n} {X_{i} /n} $$
(6.39a)

The sample estimate of the variance

$$ \hat{\sigma }_{X}^{2} = S_{X}^{2} = \frac{1}{(n - 1)}\sum\limits_{i = 1}^{n} {(X_{i} - \overline{X})^{2} } $$
(6.39b)

The sample estimate of skewness

$$ \hat{\lambda }_{X} = \frac{n}{(n - 1)(n - 2)}\sum\limits_{i = 1}^{n} {(X_{i} - \overline{X})^{3} } $$
(6.39c)

The sample estimate of the coefficient of variation

$$ \widehat{\text{CV}}_{X} = S_{X} /\overline{X} $$
(6.39d)

The sample estimate of the coefficient of skewness

$$ \hat{\gamma }_{X} = \hat{\lambda }_{X} /S_{X}^{3} $$
(6.39e)

The sample estimates of the mean and variance are often denoted \( \bar{x}\;{\text{and}}\;s_{X}^{2} \). All of these sample estimators provide only estimates. Unless the sample size n is very large, the differences between the estimators and the true values of \( \mu_{X} ,\sigma_{X}^{2} ,\lambda_{X} ,{\text{CV}}_{X} ,\,{\text{and}}\,\gamma_{X} \) may be large. In many ways, the field of statistics is about the precision of estimators of different quantities. One wants to know how well the mean of 20 annual rainfall depths describes the true expected annual rainfall depth, or how large the difference between the estimated 100-year flood and the true 100-year flood is likely to be.

As an example of the calculation of moments, consider the flood data in Table 6.2. These data have the following sample moments:

$$ \begin{aligned} \bar{x} & = 1549.2 \\ s_{X} & = 813.5 \\ {\widehat{\text{CV}}}_{X} & = 0.525 \\ \hat{\gamma }_{X} & = 0.712 \\ \end{aligned} $$
Table 6.2 Annual Maximum Discharges on Magra River, Italy, at Calamazza, 1930–1970

As one can see, the data are positively skewed and have a relatively large coefficient of variation.
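
These statistics follow directly from Eqs. 6.39a–6.39e. Since the Table 6.2 flows are not listed here, the sketch below uses a placeholder array; applied to the actual Magra River annual maxima it would return approximately the values reported above.

```python
# Sample product-moment estimators of Eqs. 6.39a-6.39e.
import numpy as np

def sample_moments(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()                                          # Eq. 6.39a
    s2 = ((x - xbar) ** 2).sum() / (n - 1)                   # Eq. 6.39b
    lam = n * ((x - xbar) ** 3).sum() / ((n - 1) * (n - 2))  # Eq. 6.39c
    s = np.sqrt(s2)
    return xbar, s, s / xbar, lam / s ** 3                   # mean, s_X, CV (6.39d), skew (6.39e)

# Placeholder series; with the annual maxima of Table 6.2 this returns
# approximately 1549.2, 813.5, 0.525, and 0.712.
flows = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])
print(sample_moments(flows))
```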

When discussing the accuracy of sample estimates, two quantities are often considered, bias and variance. An estimator \( \hat{\theta } \) of a known or unknown quantity \( \theta \) is a function of the values of the random variable X 1, …, X n that will be available to estimate the value of θ; \( \hat{\theta } \) may be written \( \hat{\theta } \)[X 1, X 2, …, X n ] to emphasize that \( \hat{\theta } \) itself is a random variable because its value depends on the sample values of the random variable that will be observed. An estimator \( \hat{\theta } \) of a quantity \( \theta \) is biased if \( E[\hat{\theta }] \ne \theta \) and unbiased if \( E[\hat{\theta }] = \theta \). The quantity \( \{ E[\hat{\theta }] - \theta \} \) is generally called the bias of the estimator.

An unbiased estimator has the property that its expected value equals the value of the quantity to be estimated. The sample mean is an unbiased estimate of the population mean μ X because

$$ E\left[ {\overline{X}} \right] = E\left[ {\frac{1}{n}\sum\limits_{i = 1}^{n} {X_{i} } } \right] = \frac{1}{n}\sum\limits_{i = 1}^{n} {E\left[ {X_{i} } \right]} = \mu_{X} $$
(6.40)

The estimator \( S_{X}^{2} \) of the variance of X is an unbiased estimator of the true variance \( \sigma_{X}^{2} \) for independent observations (Benjamin and Cornell 1970):

$$ E\left[ {S_{X}^{2} } \right] = \sigma_{X}^{2} $$
(6.41)

However, the corresponding estimator of the standard deviation, S X, is in general a biased estimator of σ x because

$$ E[S_{X} ] \ne \sigma_{X} $$
(6.42)

The second important statistic often used to assess the accuracy of an estimator \( \hat{\theta } \) is the variance of the estimator \( {\text{Var}}(\hat{\theta }) \), which equals \( E\{ {( {\hat{\theta } - E[ {\hat{\theta }} ]} )^{2} } \} \). For the mean of a set of independent observations, the variance of the sample mean is

$$ {\text{Var}}\left( {\overline{X}} \right) = \frac{{\sigma_{X}^{2} }}{n} $$
(6.43)

It is common to call \( \sigma_{x} /\sqrt n \) the standard error of \( \overline{X} \) rather than its standard deviation. The standard error of an average is the most commonly reported measure of its precision.

The bias measures the difference between the average value of an estimator and the quantity to be estimated. The variance measures the spread or width of the estimator’s distribution. Both contribute to the amount by which an estimator deviates from the quantity to be estimated. These two errors are often combined into the mean square error. Understanding that θ is fixed, and the estimator \( \hat{\theta } \) is a random variable, the mean squared error is the expected value of the squared distance (error) between the two

$$ {\text{MSE}}\left( {\hat{\theta }} \right) = E\left[ {\left( {\hat{\theta } - \theta } \right)^{2} } \right] = \left\{ {E\left[ {\hat{\theta }} \right] - \theta } \right\}^{2} + E\left\{ {\left( {\hat{\theta } - E\left[ {\hat{\theta }} \right]} \right)^{2} } \right\} = \left[ {\text{Bias}} \right]^{2} + {\text{Var}}\left( {\hat{\theta }} \right) $$
(6.44)

where [Bias] is \( E( {\hat{\theta }} ) - \theta \). Equation 6.44 shows that the MSE, equal to the expected average squared deviation of the estimator \( \hat{\theta } \) from the true value of the parameter θ, can be computed as the bias squared plus the variance of the estimator. MSE is a convenient measure of how closely \( \hat{\theta } \) approximates θ because it combines both bias and variance in a logical way.

Estimation of the coefficient of skewness γ x provides a good example of the use of the MSE for evaluating the total deviation of an estimate from the true population value. The sample estimate \( \hat{\gamma }_{X} \) of \( \gamma_{X} \) is often biased, has a large variance , and was shown by Kirby (1974) to be bounded so that

$$ |\hat{\gamma }_{X} | \le \sqrt n $$
(6.45)

where n is the sample size. The bounds do not depend on the true skew γ X . However, the bias and variance of \( \hat{\gamma }_{X} \) do depend on the sample size and the actual distribution of X. Table 6.3 contains the expected value and standard deviation of the estimated coefficient of skewness \( \hat{\gamma }_{X} \) when X has either a normal distribution, for which γ X  = 0, or a gamma distribution with γ X  = 0.25, 0.50, 1.00, 2.00, or 3.00. These values are adapted from Wallis et al. (1974a, b) who employed moment estimators slightly different than those in Eq. 6.39a.

Table 6.3 Sampling properties of coefficient of skewness estimator

For the normal distribution, \( E\left[ {\hat{\gamma }_{X} } \right] = 0 \) and Var\( \left[ {\hat{\gamma }_{X} } \right] \) ≅ 5/n. In this case, the skewness estimator is unbiased but highly variable. In all the other cases in Table 6.3 it is also biased.

To illustrate the magnitude of these errors, consider the mean square error of the skew estimator \( \hat{\gamma }_{X} \) calculated from a sample of size 50 when X has a gamma distribution with γ X  = 0.50, a reasonable value for annual streamflows. The expected value of \( \hat{\gamma }_{X} \) is 0.45; its variance equals (0.37)², its standard deviation squared. Using Eq. 6.44, the mean square error of \( \hat{\gamma }_{X} \) is

$$ {\text{MSE}}\left( {\hat{\gamma }_{X} } \right) = \left( {0.45 - 0.50} \right)^{2} + \left( {0.37} \right)^{2} = 0.0025 + 0.1369 = 0.139 \cong 0.14 $$
(6.46)

An unbiased estimate of γ X is simply (0.50/0.45)\( \hat{\gamma }_{X} \). Here the estimator provided by Eq. 6.39e has been scaled to eliminate bias. This unbiased estimator has mean squared error

$$ {\text{MSE}}\left( {\frac{{0.50\hat{\gamma }_{{_{X} }} }}{0.45}} \right) = \left( {0.50-0.50} \right)^{2} + \left[ {\left( {\frac{0.50}{0.45}} \right)(0.37)} \right]^{2} = 0.169 \cong 0.17 $$
(6.47)

The mean square error of this unbiased estimator of γ X is larger than the mean square error of the biased estimate. Unbiasing \( \hat{\gamma }_{X} \) results in a larger mean square error for all the cases listed in Table 6.3 except for the normal distribution for which γ X  = 0, and the gamma distribution with γ X  = 3.00.

As shown here for the skew coefficient, biased estimators often have smaller mean square errors than unbiased estimators. Because the mean square error measures the total average deviation of an estimator from the quantity being estimated, this result demonstrates that the strict or unquestioning use of unbiased estimators is not advisable. Additional information on the sampling distribution of quantiles and moments is contained in Stedinger et al. (1993).
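
A small Monte Carlo experiment makes the decomposition in Eq. 6.44 concrete. The sketch below repeatedly draws samples of size 50 from a gamma distribution whose true skew coefficient is 0.50 (shape parameter 16, since the skew of a gamma distribution is 2 divided by the square root of its shape parameter), applies the skew estimator of Eq. 6.39e, and estimates its bias, variance, and mean square error; the numbers vary with the random seed and differ somewhat from Table 6.3, which was based on slightly different estimators.

```python
# Monte Carlo check of MSE = bias^2 + variance (Eq. 6.44) for the skew estimator of Eq. 6.39e.
import numpy as np

rng = np.random.default_rng(1)
n, n_rep = 50, 20000
true_skew = 0.50
shape = (2.0 / true_skew) ** 2     # gamma skew = 2/sqrt(shape), so shape = 16

def skew_estimate(x):
    m = len(x)
    xbar = x.mean()
    s = np.sqrt(((x - xbar) ** 2).sum() / (m - 1))
    lam = m * ((x - xbar) ** 3).sum() / ((m - 1) * (m - 2))
    return lam / s ** 3

est = np.array([skew_estimate(rng.gamma(shape, 1.0, size=n)) for _ in range(n_rep)])

bias = est.mean() - true_skew
var = est.var()
mse = ((est - true_skew) ** 2).mean()
print(bias, var, mse, bias ** 2 + var)   # the last two quantities agree (Eq. 6.44)
```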

6.2.4 L-Moments and Their Estimators

L-moments are another way to summarize the statistical properties of hydrologic data based on linear combinations of the original sample (Hosking 1990). Recently, hydrologists have found that regionalization methods (to be discussed in Sect. 6.5) using L-moments are superior to methods using traditional moments (Hosking and Wallis 1997; Stedinger and Lu 1995). L-moments have also proved useful for construction of goodness-of-fit tests (Hosking et al. 1985; Chowdhury et al. 1991; Fill and Stedinger 1995), measures of regional homogeneity and distribution selection methods (Vogel and Fennessey 1993; Hosking and Wallis 1997).

The first L-moment designated as λ 1 is simply the arithmetic mean

$$ \lambda_{1} = E[X] $$
(6.48)

Now let X (i|n) be the ith smallest observation in a sample of size n (so that i = n corresponds to the largest). Then, for any distribution, the second L-moment, λ 2, is a description of scale based upon the expected difference between two randomly selected observations.

$$ \lambda_{2} = (1/2)E\left[ {X_{(2|2)} - X_{(1|2)} } \right] $$
(6.49)

Similarly, L-moment measures of skewness and kurtosis use three and four randomly selected observations, respectively.

$$ {\lambda }_{3} = (1/3)E\left[ {X_{(3|3)} - 2X_{(2|3)} + X_{(1|3)} } \right] $$
(6.50)
$$ {\lambda }_{4} = (1/4)E\left[ {X_{(4|4)} - 3X_{(3|4)} + 3X_{(2|4)} - X_{(1|4)} } \right] $$
(6.51)

Sample estimates are often computed using intermediate statistics called probability weighted moments (PWMs). The rth probability weighted moment is defined as

$$ {\beta }_{r} = E\left\{ {X\left[ {F(X)} \right]^{r} } \right\} $$
(6.52)

where F(X) is the cumulative distribution function of X. Recommended (Landwehr et al. 1979; Hosking and Wallis 1995) unbiased PWM estimators , b r , of β r are computed as

$$ \begin{aligned} b_{0} & = \overline{X} \\ b_{1} & = \frac{1}{n(n - 1)}\sum\limits_{j = 2}^{n} {(j - 1)X_{(j)} } \\ b_{2} & = \frac{1}{n( n - 1 )( n - 2 )}\sum\limits_{j = 3}^{n} {( {j - 1} )} ( j - 2 )X_{(j )} \\ \end{aligned} $$
(6.53)

These are examples of the general formula for computing estimators b r of β r .

$$ b_{r} = \frac{1}{n}\sum\limits_{i = r + 1}^{n} {\left( {\begin{array}{*{20}c} {i - 1} \\ r \\ \end{array} } \right)}{X_{(i)}} / \left( {\begin{array}{*{20}c} {n - 1} \\ {r} \\ \end{array} } \right) = \frac{1}{r + 1}\sum\limits_{i = r + 1}^{n} {\left( {\begin{array}{*{20}c} {i - 1} \\ r \\ \end{array} } \right)}{X_{(i)}} /\left( {\begin{array}{*{20}c} n \\ {r + 1} \\ \end{array} } \right)$$
(6.54)

for r = 1, …, n − 1.

L-moments are easily calculated in terms of probability weighted moments (PWMs) using

$$ \begin{aligned} {\lambda }_{1} & = {\beta }_{0} \\ {\lambda }_{2} & = 2{\beta }_{1} - {\beta }_{0} \\ {\lambda }_{3} & = 6{\beta }_{2} - 6{\beta }_{1} + {\beta }_{0} \\ {\lambda }_{4} & = 20{\beta }_{3} - 30{\beta }_{2} + 12{\beta }_{1} - {\beta }_{0} \\ \end{aligned} $$
(6.55)

Formulas for directly calculating sample estimates of the L-moments, without first computing the PWMs, are provided by Wang (1997). Measures of the coefficient of variation, skewness, and kurtosis of a distribution can be computed with L-moments, as they can with traditional product moments. Whereas skewness primarily measures the asymmetry of a distribution, the kurtosis is an additional measure of the thickness of the extreme tails. Kurtosis is particularly useful for comparing symmetric distributions that have a skewness coefficient of zero. Table 6.4 provides definitions of the traditional coefficient of variation, coefficient of skewness, and coefficient of kurtosis, as well as the L-coefficient of variation, L-coefficient of skewness, and L-coefficient of kurtosis.

Table 6.4 Definitions of dimensionless product-moment and L-moment ratios

The flood data in Table 6.2 can be used to provide an example of L-moments . Equation 6.53 yields estimates of the first three Probability Weighted Moments

$$ \begin{aligned} b_{0} & = 1549.20 \\ b_{1} & = 1003.89 \\ b_{2} & = 759.02 \\ \end{aligned} $$
(6.56)

Recall b 0 is just the sample average \( \bar{x} \). The sample L-moments are easily calculated using the probability weighted moments (PWMs). One obtains

$$ \begin{aligned} \hat{\lambda }_{1} & = b_{0} = 1549 \\ \hat{\lambda }_{2} & = 2b_{1} - b_{0} = 458 \\ \hat{\lambda }_{3} & = 6b_{2} - 6b_{1} + b_{0} = 80 \\ \end{aligned} $$
(6.57)

Thus the sample estimates of the L-Coefficient of Variation , t 2, and L-Coefficient of Skewness , t 3, are

$$ \begin{aligned} t_{2} & = 0.295 \\ t_{3} & = 0.174 \\ \end{aligned} $$
(6.58)
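
The PWM and L-moment computations of Eqs. 6.53 and 6.55 and the ratios t 2 and t 3 are easily coded. Because the Table 6.2 flows are not reproduced here, the sketch below uses a placeholder array; with the actual Magra River data it would return the values in Eqs. 6.56–6.58.

```python
# Unbiased PWM estimators b0, b1, b2 (Eq. 6.53) and sample L-moments and ratios (Eq. 6.55).
import numpy as np

def l_moments(x):
    xs = np.sort(np.asarray(x, dtype=float))   # x_(1) <= ... <= x_(n)
    n = len(xs)
    j = np.arange(1, n + 1)
    b0 = xs.mean()
    b1 = ((j - 1) * xs).sum() / (n * (n - 1))
    b2 = ((j - 1) * (j - 2) * xs).sum() / (n * (n - 1) * (n - 2))
    lam1 = b0
    lam2 = 2 * b1 - b0
    lam3 = 6 * b2 - 6 * b1 + b0
    # t2 = L-coefficient of variation, t3 = L-coefficient of skewness
    return lam1, lam2, lam3, lam2 / lam1, lam3 / lam2

flows = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])
print(l_moments(flows))
```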

6.3 Distributions of Random Events

A frequent task in water resources planning is the development of a model of some probabilistic or stochastic phenomena such as streamflows, flood flows, rainfall, temperatures, evaporation, sediment or nutrient loads, nitrate or organic compound concentrations, or water demands. This often requires that one fit a probability distribution function to a set of observed values of the random variable. Sometimes, one’s immediate objective is to estimate a particular quantile of the distribution, such as the 100-year flood, 50-year 6-h-rainfall depth, or the minimum seven-day-average expected once-in-10-year flow. Then the fitted distribution and its statistical parameters can characterize that random variable. In a stochastic simulation, fitted distributions are used to generate possible values of the random variable in question.

Rather than fitting a reasonable and smooth mathematical distribution, one could use the empirical distribution represented by the data to describe the possible values that a random variable may assume in the future and their frequency. In practice, the true mathematical form of the distribution that describes the events is not known. Moreover, even if it were known, its functional form may have too many parameters to be of much practical use. Thus using the empirical distribution represented by the data itself has substantial appeal.

Generally the free parameters of the theoretical distribution are selected (estimated) so as to make the fitted distribution consistent with the available data. The goal is to select a physically reasonable and simple distribution to describe the frequency of the events of interest, to estimate that distribution’s parameters, and ultimately to obtain quantiles, performance indices, and risk estimates of satisfactory accuracy for the problem at hand. Use of a theoretical distribution has several advantages over use of the empirical distribution:

1. It presents a smooth interpretation of the empirical distribution. As a result, quantiles, performance indices, and other statistics computed using the fitted distribution should be more easily estimated than those computed from the empirical distribution.

2. It provides a compact and easy-to-use representation of the data.

3. It is likely to provide a more realistic description of the range of values that the random variable may assume and their likelihood; for example, using the empirical distribution one often assumes that no values larger or smaller than the sample maximum or minimum can occur. For many situations this is unreasonable.

4. Often one needs to estimate the likelihood of extreme events that lie outside of the range of the sample (either in terms of x values or in terms of frequency); such extrapolation makes little sense with the empirical distribution.

5. In many cases one is not interested in X, but instead in derived variables Y that are functions of X. This could be a performance function for some system. If Y is the performance function, interest might be primarily in its mean value E[Y], or the probability some standard is exceeded, Pr{Y > standard}. For some theoretical X-distributions, the resulting Y-distribution may be available in closed form, making the analysis rather simple. (The normal distribution works with linear models, the lognormal distribution with product models, and the gamma distribution with queuing systems.)

This section provides a brief introduction to some useful techniques for estimating the parameters of probability distribution functions and determining if a fitted distribution provides a reasonable or acceptable model of the data. Subsections are also included on families of distributions based on the normal, gamma, and generalized-extreme-value distributions. These three families have found frequent use in water resource planning (Kottegoda and Rosso 1997).

6.3.1 Parameter Estimation

Given a set of observations to which a distribution is to be fit, one first selects a distribution function to serve as a model of the distribution of the data. The choice of a distribution may be based on experience with data of that type, some understanding of the mechanisms giving rise to the data, and/or examination of the observations themselves. One can then estimate the parameters of the chosen distribution and determine if the observed data could have been drawn from the fitted distribution. If not, the fitted distribution is judged to be unacceptable.

In many cases, good estimates of a distribution’s parameters are obtained by the maximum likelihood estimation procedure. Given a set of n independent observations {x 1, …, x n } of a continuous random variable X, the joint probability density function for the observations is

$$ f_{{X_{1} ,X_{2} ,\, \ldots ,\,X_{n} }} ( {x_{1} ,\, \ldots ,\,x_{n} |\theta } ) = f_{X} ( {x_{1} |\theta } ) \cdot f_{X} ( {x_{2} |\theta } )\, \cdots \,f_{X} ( {x_{n} |\theta } ) $$
(6.59)

where θ is the vector of the distribution’s parameters.

The maximum likelihood estimator of θ is that vector which maximizes Eq. 6.59 and thereby makes it as likely as possible to have observed the values {x 1, …, x n }.

Considerable work has gone into studying the properties of maximum likelihood parameter estimates. Under rather general conditions, asymptotically the estimated parameters are normally distributed, unbiased, and have the smallest possible variance of any asymptotically unbiased estimator (Bickel and Doksum 1977). These, of course, are asymptotic properties, valid for large sample sizes n. Better estimation procedures, perhaps yielding biased parameter estimates, may exist for small sample sizes. Stedinger (1980) provides such an example. Still, maximum likelihood procedures are to be highly recommended with moderate and large samples, even though the iterative solution of nonlinear equations is often required.

An example of the maximum likelihood procedure for which closed-form expressions for the parameter estimates are obtained is provided by the lognormal distribution. The probability density function of a lognormally distributed random variable X is

$$ f_{X} ( x ) = \frac{1}{{x\sqrt {2\pi \sigma^{2} } }} \exp \left\{ { - \frac{1}{{2\sigma^{2} }}\left[ {\ln(x) - \mu } \right]^{2} } \right\} $$
(6.60)

Here the parameters μ and σ 2 are the mean and variance of the logarithm of X, and not of X itself.

Maximizing the logarithm of the joint density for {x 1, …, x n } is more convenient than maximizing the joint probability density itself. Hence the problem can be expressed as the maximization of the log-likelihood function

$$ \begin{aligned} L & = \ln {\left[ \prod\limits_{i = 1}^{n} {f\left( {x_{i} \left| {\mu , \sigma } \right.} \right)} \right]} \\ & = \sum\limits_{i = 1}^{n} {\ln f\left( {x_{i} \left| {\mu , \sigma } \right.} \right)} \\ & = - \sum\limits_{i = 1}^{n} {\ln \left( {x_{i} \sqrt {2\pi } } \right)}\\&\quad - n \ln(\sigma ) - \frac{1}{{2\sigma^{2} }}\sum\limits_{i = 1}^{n} {\left[ {\ln (x_{i} ) - \mu } \right]}^{2} \\ \end{aligned} $$
(6.61)

The maximum can be obtained by equating to zero the partial derivatives ∂L/∂μ and ∂L/∂σ whereby one obtains

$$ \begin{aligned} 0 & = \frac{\partial L}{\partial \mu } = \frac{1}{{\sigma^{2} }}\sum\limits_{i = 1}^{n} {\left[ {\ln \left( {x_{i} } \right) - \mu } \right]} \\ 0 & = \frac{\partial L}{\partial \sigma } = - \frac{n}{\sigma } + \frac{1}{{\sigma^{3} }}\sum\limits_{i = 1}^{n} {\left[ {\ln \left( {x_{i} } \right) - \mu } \right]}^{2} \\ \end{aligned} $$
(6.62)

These equations yield the estimators

$$ \begin{aligned} & \hat{\mu } = \frac{1}{n}\sum\limits_{i = 1}^{n} {\ln \left( {x_{i} } \right)} \\ & \hat{\sigma }^{{^{2} }} = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left[ {\ln \left( {x_{i} } \right) - \hat{\mu }} \right]}^{2} \\ \end{aligned} $$
(6.63)

The second-order conditions for a maximum are met and these values do maximize Eq. 6.59. It is useful to note that if one defines a new random variable Y = ln(X), then the maximum likelihood estimates of the parameters μ and σ 2, which are the mean and variance of the Y distribution, are the sample estimates of the mean and variance of Y

$$ \begin{aligned} & \hat{\mu } = \bar{y} \\ & \hat{\sigma }^{2} = [(n - 1)/n]s_{Y}^{2} \\ \end{aligned} $$
(6.64)

The correction [(n − 1)/n] in this last equation is often neglected.
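
For the lognormal distribution, then, the maximum likelihood fit reduces to computing the sample mean and the divide-by-n variance of the log-transformed observations (Eqs. 6.63 and 6.64), as in the sketch below; the flow series shown is a placeholder, since the Table 6.2 values are not listed here.

```python
# Maximum likelihood estimates of the lognormal parameters (Eq. 6.63):
# mu_hat and sigma2_hat are the mean and (divide-by-n) variance of ln(x).
import numpy as np

flows = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])
logs = np.log(flows)

mu_hat = logs.mean()
sigma2_hat = ((logs - mu_hat) ** 2).mean()   # note 1/n rather than 1/(n-1), per Eqs. 6.63-6.64

print(mu_hat, sigma2_hat)
```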

The second commonly used parameter estimation procedure is the method of moments. The method of moments is often a quick and simple way of obtaining parameter estimates for many distributions. For a distribution with m = 1, 2, or 3 parameters, the first m moments of the postulated distribution, given in Eqs. 6.34, 6.35, and 6.37, are equated to the estimates of those moments calculated using Eqs. 6.39a–6.39c. The resulting nonlinear equations are solved for the unknown parameters.

For the lognormal distribution, the mean and variance of X as a function of the parameters μ and σ are given by

$$ \begin{aligned} \mu_{X} & = \exp \left( {\mu + \frac{1}{2}\sigma^{2} } \right) \\ \sigma_{X}^{2} & = \exp \left( {2\mu + \sigma^{2} } \right)\left[ {\exp \left( {\sigma^{2} } \right) - 1} \right] \\ \end{aligned} $$
(6.65)

Substituting \( \bar{x} \) for μ X and \( s_{x}^{2} \) for \( \sigma_{X}^{2} \) and solving for μ and σ 2 one obtains

$$ \begin{aligned} \hat{\sigma }^{2} & = \ln\left( {1 + s_{X}^{2} /\bar{x}^{2} } \right) \\ \hat{\mu } & = \ln\left( {\frac{{\bar{x}}}{{\sqrt {1 + s_{X}^{2} /\bar{x}^{2} } }}} \right) = \ln \bar{x} - \frac{1}{2}\hat{\sigma }^{2} \\ \end{aligned} $$
(6.66)

The data in Table 6.2 provide an illustration of both fitting methods. One can easily compute the sample mean and variance of the logarithms of the flows to obtain

$$ \begin{aligned} & \hat{\mu } = 7.202 \\ & \hat{\sigma }^{2} = 0.3164 = (0.5625)^{2} \\ \end{aligned} $$
(6.67)

Alternatively, the sample mean and variance of the flows themselves are

$$ \begin{aligned} \bar{x} & = 1549.2 \\ s_{X}^{2} & = 661{,}800 = (813.5)^{2} \\ \end{aligned} $$
(6.68)

Substituting those two values in Eq. 6.66 yields

$$ \begin{aligned} \hat{\mu }& = 7.224 \\ \hat{\sigma }^{2} & = 0.2435 = (0.4935)^{2} \\ \end{aligned} $$
(6.69)
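
The method of moments fit of Eq. 6.66 needs only the sample mean and variance of the untransformed data. The sketch below implements it with a placeholder flow series; applied to the Magra River flows of Table 6.2 it would reproduce Eq. 6.69, which can then be compared with the maximum likelihood values in Eq. 6.67.

```python
# Method-of-moments estimates of the lognormal parameters (Eq. 6.66).
import numpy as np

def lognormal_mom(x):
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    s2 = x.var(ddof=1)                       # sample variance, Eq. 6.39b
    sigma2 = np.log(1.0 + s2 / xbar ** 2)    # Eq. 6.66
    mu = np.log(xbar) - 0.5 * sigma2
    return mu, sigma2

flows = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])
print(lognormal_mom(flows))   # with the Table 6.2 data: approximately (7.224, 0.2435)
```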

Method of moments and maximum likelihood are just two of many possible estimation methods. Just as method of moments equates sample estimators of moments to population values and solves for a distribution’s parameters, one can simply equate L-moment estimators to population values and solve for the parameters of a distribution. The resulting method of L-moments has received considerable attention in the hydrologic literature (Landwehr et al. 1978; Hosking et al. 1985; 1987; Hosking 1990; Wang 1997). It has been shown to have significant advantages when used as a basis for regionalization procedures that will be discussed in Sect. 6.5 (Lettenmaier et al. 1987; Stedinger and Lu 1995; Hosking and Wallis 1997).

Bayesian procedures provide another approach that is related to maximum likelihood estimation. Bayesian inference employs the likelihood function to represent the information in the data. That information is augmented with a prior distribution that describes what is known about constraints on the parameters and their likely values beyond the information provided by the recorded data available at a site. The likelihood function and the prior probability density function are combined to obtain the probability density function that describes the posterior distribution of the parameters

$$ f_{\theta } (\varvec{\theta}|x_{1} ,x_{2} ,\, \ldots ,\, x_{n} ) \propto f_{X} (x_{1} ,x_{2} ,\, \ldots ,\,x_{n} |\varvec{\theta}){\xi }(\varvec{\theta}) $$
(6.70)

The symbol ∝ means “proportional to” and ξ(θ) is the probability density function for the prior distribution for θ (Kottegoda and Rosso 1997). Thus, except for a constant of proportionality, the probability density function describing the posterior distribution of the parameter vector θ is equal to the product of the likelihood function f X (x 1, x 2, …, x n |θ) and the probability density function for the prior distribution ξ(θ) for θ.

Advantages of the Bayesian approach are that it allows the explicit modeling of uncertainty in parameters (Stedinger 1997; Kuczera 1999), and provides a theoretically consistent framework for integrating systematic flow records with regional and other hydrologic information (Vicens et al. 1975; Stedinger 1983; and Kuczera 1983). Martins and Stedinger (2000) illustrate how a prior distribution can be used to enforce realistic constraints upon a parameter as well as providing a description of its likely values. In their case use of a prior of the shape parameter κ of a GEV distribution allowed definition of generalized maximum likelihood estimators that over the κ-range of interest performed substantially better than maximum likelihood, moment, and L-moment estimators.

While Bayesian methods have been available for decades, the computational challenge posed by the solution of Eq. 6.70 has been an obstacle to their use. Solutions to Eq. 6.70 have been available for special cases such as normal data, and binomial and Poisson samples (Raiffa and Schlaifer 1961; Benjamin and Cornell 1970; Zellner 1971). However, a new and very general set of Markov Chain Monte Carlo (MCMC) procedures allows numerical computation of the posterior distributions of parameters for a very broad class of models (Gilks et al. 1996). As a result, Bayesian methods are now becoming much more popular, and are the standard approach for many difficult problems that are not easily addressed by traditional methods (Gelman et al. 1995; Carlin and Louis 2000). The use of Monte Carlo Bayesian methods in flood frequency analysis, rainfall-runoff modeling, and evaluation of environmental pathogen concentrations is illustrated by Wang (2001), Bates and Campbell (2001), and Crainiceanu et al. (2002), respectively.

Finally, a simple method of fitting flood frequency curves is to plot the ordered flood values on special probability paper and then to draw a line through the data (Gumbel 1958). Even today, that simple method is still attractive when some of the smallest values are zero or unusually small, or have been censored as will be discussed in Sect. 6.4 (Kroll and Stedinger 1996). Plotting the ranked annual maximum series against a probability scale is always an excellent and recommended way to see what the data look like and for determining whether a fitted curve is or is not consistent with the data (Stedinger et al. 1993).

Statisticians and hydrologists have investigated which of these methods most accurately estimates the parameters themselves or the quantiles of the distribution (Stedinger 1997). One also needs to determine how accuracy should be measured. Some studies have used average squared deviations, some have used average absolute weighted deviations with different weights on under- and over-estimation, and some have used the squared deviations of the log-quantile estimator (Slack et al. 1975; Kroll and Stedinger 1996). In almost all cases, one is also interested in the bias of an estimator, which is the average value of the estimator minus the true value of the parameter or quantile being estimated. Special estimators have been developed to compute design events that on average are exceeded with the specified probability, and have the anticipated risk of being exceeded (Beard 1960, 1997; Rasmussen and Rosbjerg 1989, 1991a, b; Stedinger 1997; Rosbjerg and Madsen 1998).

6.3.2 Model Adequacy

After estimating the parameters of a distribution, some check of model adequacy should be made. Such checks vary from simple comparisons of the observations with the fitted model using graphs or tables, to rigorous statistical tests. Some of the early and simplest methods of parameter estimation were graphical techniques. Although quantitative techniques are generally more accurate and precise for parameter estimation, graphical presentations are invaluable for comparing the fitted distribution with the observations for the detection of systematic or unexplained deviations between the two. The observed data will plot as a straight line on probability graph paper if the postulated distribution is the true distribution of the observation. If probability graph paper does not exist for the particular distribution of interest , more general techniques can be used.

Let x (i) be the ith smallest value in a set of observed values {x i } so that x (1) ≤ x (2) ≤ ··· ≤ x (n). The random variable X (i) provides a reasonable estimate of the pth quantile x p of the true distribution of X for p = i/(n + 1). In fact, if one thinks of the cumulative probability U i associated with the random variable X (i), U i  = F X (X (i)), then if the observations X (i) are independent, the U i have a beta distribution (Gumbel 1958) with probability density function

$$ \begin{aligned} f_{{U_{i} }} (u) & = \frac{n!}{(i - 1) ! (n - i) !} u^{i - 1} (1 - u)^{n - i} \\& \quad 0 \le u \le 1 \end{aligned} $$
(6.71)

This beta distribution has mean and variance of

$$ E\left[ {U_{i} } \right] = \frac{i}{n + 1} $$
(6.72a)

and

$$ {\text{Var}}(U_{i} ) = \frac{i(n - i + 1)}{{(n + 1)^{2} (n + 2)}} $$
(6.72b)

A good graphical check of the adequacy of a fitted distribution G(x) is obtained by plotting the observations x (i) versus G −1[i/(n + 1)] (Wilk and Gnanadesikan 1968). Even if G(x) exactly equaled the true X-distribution F X [x], the plotted points would not fall exactly on a 45-degree line through the origin of the graph. This would only occur if F X [x (i)] exactly equaled i/(n + 1) and therefore each x (i) exactly equaled F −1 X [i/(n + 1)].

An appreciation for how far an individual observation x (i) can be expected to deviate from G −1[i/(n + 1)] can be obtained by plotting G −1[u (0.75) i ] and G −1[u (0.25) i ], where u (0.75) i and u (0.25) i are the upper and lower quantiles of the distribution of U i obtained from integrating the probability density function in Eq. 6.71. The required incomplete beta function is also available in many software packages, including Microsoft Excel. Stedinger et al. (1993) report that u (1) and (1 − u (n)) fall between 0.052/n and 3/(n + 1) with a probability of 90%, thus illustrating the great uncertainty associated with those values.

Figure 6.2a, b illustrate the use of this quantile-quantile plotting technique by displaying the results of fitting a normal and a lognormal distribution to the annual maximum flows in Table 6.2 for the Magra River, Italy, at Calamazza for the years 1930–1970. The observations of X (i), given in Table 6.2, are plotted on the vertical axis against the quantiles G −1[i/(n + 1)] on the horizontal axis.

Fig. 6.2 Plots of annual maximum discharges of Magra River, Italy, versus quantiles of fitted a normal and b lognormal distributions

A probability plot is essentially a scatter plot of the sorted observations X (i) versus some approximation of their expected or anticipated value, represented by G −1(p i ), where, as suggested, p i  = i/(n + 1). The p i values are called plotting positions. A common alternative to i/(n + 1) is (i − 0.5)/n, which results from a probabilistic interpretation of the empirical distribution of the data. Many reasonable plotting position formulas have been proposed based upon the sense in which G −1(p i ) should approximate X (i). The Weibull formula i/(n + 1) and the Hazen formula (i − 0.5)/n bracket most of the reasonable choices. Popular formulas are summarized in Stedinger et al. (1993), who also discuss the generation of probability plots for many distributions commonly employed in hydrology.
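
The coordinates of a probability plot such as Fig. 6.2 are generated by sorting the observations, attaching plotting positions p i , and evaluating the inverse cdf of the fitted distribution at those probabilities. The sketch below uses the Weibull positions i/(n + 1) and a normal distribution fitted by moments to a placeholder sample; plotting the sorted observations against the fitted quantiles gives the quantile-quantile plot.

```python
# Quantile-quantile (probability) plot coordinates: x_(i) versus G^{-1}[i/(n+1)].
import numpy as np
from scipy import stats

x = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])  # placeholder
x_sorted = np.sort(x)
n = len(x_sorted)

p = np.arange(1, n + 1) / (n + 1)                                   # Weibull plotting positions
G = stats.norm(loc=x_sorted.mean(), scale=x_sorted.std(ddof=1))     # normal fitted by moments
w = G.ppf(p)                                                        # fitted quantiles G^{-1}(p_i)

# A plot of x_sorted (vertical axis) against w (horizontal axis) should scatter
# about the 45-degree line if the fitted distribution is adequate.
print(np.column_stack([w, x_sorted]))
```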

Rigorous statistical tests are available for trying to determine whether or not it is reasonable to assume that a given set of observations could have been drawn from a particular family of distributions. Although not the most powerful of such tests, the Kolmogorov–Smirnov test provides bounds within which every observation should lie if the sample is actually drawn from the assumed distribution. In particular, for G = F X , the test specifies that

$$ \begin{aligned} &\Pr \left[ {G^{ - 1} \left( {\frac{i}{n} - C_{\alpha } } \right) \le X_{(i)} \le G^{ - 1} \left( {\frac{i - 1}{n} + C_{\alpha } } \right){\text{for every i}}} \right] \\& \quad = 1 - \alpha\end{aligned} $$
(6.73)

where C α is the critical value of the test at significance level α. Formulas for C α as a function of n are contained in Table 6.5 for three cases: (1) when G is completely specified independent of the sample’s values; (2) when G is the normal distribution and the mean and variance are estimated from the sample with \( \bar{x}{\text{ and }}s_{x}^{2} \); and (3) when G is the exponential distribution and the scale parameter is estimated as \( 1/(\bar{x}) \). Chowdhury et al. (1991) provide critical values for the Gumbel and GEV distribution with known shape parameter κ. For other distributions, the values obtained from Table 6.5 may be used to construct approximate simultaneous confidence intervals for every X (i).

Table 6.5 Critical values C α of the Kolmogorov–Smirnov statistic as a function of sample size n

Figure 6.2 contains 90% confidence intervals for the plotted points constructed in this manner. For the normal distribution, the critical value C α equals \( 0.819/(\sqrt{n} - 0.01 + 0.85/\sqrt{n}) \), where 0.819 corresponds to α = 0.10. For n = 40, one computes C α  = 0.127. As can be seen in Fig. 6.2a, the annual maximum flows are not consistent with the hypothesis that they were drawn from a normal distribution; three of the observations lie outside the simultaneous 90% confidence intervals for all points. This demonstrates a statistically significant lack of fit. The fitted normal distribution underestimates the quantiles corresponding to small and large probabilities while overestimating the quantiles in an intermediate range. In Fig. 6.2b, deviations between the fitted lognormal distribution and the observations can be attributed to the differences between F X (x (i)) and i/(n + 1). Generally, the points are all near the 45-degree line through the origin, and no major systematic deviations are apparent.
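
The simultaneous Kolmogorov–Smirnov bounds of Eq. 6.73 can be added to such a plot directly. The sketch below computes C α for the normal case with estimated parameters (α = 0.10, using the approximation quoted above) and the corresponding lower and upper bound for each ordered observation; the data are a placeholder, and infinite bounds occur where i/n − C α ≤ 0 or (i − 1)/n + C α ≥ 1.

```python
# 90% simultaneous Kolmogorov-Smirnov bounds for each X_(i) (Eq. 6.73), normal case
# with mean and variance estimated from the sample.
import numpy as np
from scipy import stats

x = np.array([980., 2150., 1220., 3400., 760., 1890., 1450., 2600., 1120., 1710.])  # placeholder
n = len(x)
G = stats.norm(loc=x.mean(), scale=x.std(ddof=1))

c_alpha = 0.819 / (np.sqrt(n) - 0.01 + 0.85 / np.sqrt(n))   # alpha = 0.10 approximation

i = np.arange(1, n + 1)
lower = G.ppf(np.clip(i / n - c_alpha, 0.0, 1.0))        # ppf(0) = -inf: unbounded below
upper = G.ppf(np.clip((i - 1) / n + c_alpha, 0.0, 1.0))  # ppf(1) = +inf: unbounded above

print(c_alpha)
print(np.column_stack([lower, np.sort(x), upper]))
```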

The Kolmogorov–Smirnov test conveniently provides bounds within which every observation on a probability plot should lie if the sample is actually drawn from the assumed distribution, and thus is useful for visually evaluating the adequacy of a fitted distribution. However, it is not the most powerful test available for evaluating from which of several families a set of observations is likely to have been drawn. For that purpose several other more analytical tests are available (Filliben 1975; Hosking 1990; Chowdhury et al. 1991; Kottegoda and Rosso 1997).

The Probability Plot Correlation test is a popular and powerful test of whether a sample has been drawn from a postulated distribution, though it is often weaker than alternative tests at rejecting thin-tailed alternatives (Filliben 1975; Fill and Stedinger 1995). A test with greater power has a greater probability of correctly determining that a sample is not from the postulated distribution. The Probability Plot Correlation Coefficient test employs the correlation r between the ordered observations x (i) and the corresponding fitted quantiles w i  = G −1(p i ), determined by plotting positions p i for each x (i). Values of r near 1.0 suggest that the observations could have been drawn from the fitted distribution: r measures the linearity of the probability plot, providing a quantitative assessment of fit. If \( \bar{x} \) denotes the average value of the observations and \( \bar{w} \) denotes the average value of the fitted quantiles, then

$$ r = \frac{{\sum {\left( {x_{(i)} - \bar{x}} \right)\left( {w_{i} - \bar{w}} \right)} }}{{\left[ {\sum {\left( {x_{(i)} - \bar{x}} \right)^{2} } \sum {\left( {w_{i} - \bar{w}} \right)^{2} } } \right]^{0.5} }} $$
(6.74)

Table 6.6 provides critical values for r for the normal distribution, or the logarithms of lognormal variates, based upon the Blom plotting position that has p i  = (i − 3/8)/(n + 1/4). Values for the Gumbel distribution are reproduced in Table 6.7 for use with the Gringorten plotting position p i  = (i − 0.44)/(n + 0.12). The table also applies to logarithms of Weibull variates (Stedinger et al. 1993). Other tables are available for the GEV (Chowdhury et al. 1991), the Pearson type 3 (Vogel and McMartin 1991), and exponential and other distributions (D’Agostino and Stephens 1986).

Table 6.6 Lower critical values of the probability plot correlation test statistic for the normal distribution using p i  = (i − 3/8)/(n + 1/4) (Vogel 1987)
Table 6.7 Lower critical values of the probability plot correlation test statistic for the Gumbel distribution using p i  = (i − 0.44)/(n + 0.12) (Vogel 1987)
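A short Python sketch of the correlation statistic in Eq. 6.74 for a normal fit follows; the Blom plotting positions are assumed, and the resulting r would be compared with the critical values of Table 6.6.

```python
import numpy as np
from scipy.stats import norm

def ppcc_normal(x):
    """Probability plot correlation coefficient r of Eq. 6.74 for a normal fit,
    using the Blom plotting positions p_i = (i - 3/8)/(n + 1/4).

    Because r is unchanged by a linear rescaling of the fitted quantiles,
    the standardized normal quantiles can be used directly for the w_i."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    w = norm.ppf(p)
    return np.corrcoef(x, w)[0, 1]
```

For a lognormal fit one would pass the logarithms of the data; the hypothesized distribution is rejected when r falls below the tabulated critical value.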

L-moment ratios appear to provide goodness-of-fit tests that are superior to both the Kolmogorov–Smirnov and the Probability Plot Correlation test (Hosking 1990; Chowdhury et al. 1991; Fill and Stedinger 1995). For normal data, the L-skewness estimator \( \hat{\tau }_{3} \) (or t 3) would have mean zero and Var[\( \hat{\tau }_{3} \)] = (0.1866 + 0.8/n)/n, allowing construction of a powerful test of normality against skewed alternatives using the normally distributed statistic

$$ Z = t_{3} /\sqrt {\left( {0.1866 + 0.8/n} \right)/n} $$
(6.75)

with a reject region |Z| > z α/2.

Chowdhury et al. (1991) derive the sampling variance of the L-CV and L-skewness estimators \( \hat{\tau }_{2} \) and \( \hat{\tau }_{3} \) as a function of κ for the GEV distribution. These allow construction of a test of whether a particular data set is consistent with a GEV distribution with a regionally estimated value of κ, or a regional κ and CV. Fill and Stedinger (1995) show that the \( \hat{\tau }_{3} \) L-skewness estimator provides a test for the Gumbel versus a general GEV distribution using the normally distributed statistic

$$ Z = (\hat{\tau }_{3} - 0.17)/\sqrt {\left( {0.2326 + 0.70/n} \right)/n} $$
(6.76)

with a reject region |Z| > z α/2.
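The two Z statistics in Eqs. 6.75 and 6.76 are easily computed once the sample L-moments are available. The following Python sketch uses the standard unbiased probability-weighted-moment estimators of the first three L-moments; the array x of observations is an assumed input.

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (lambda_1, lambda_2, lambda_3) and the
    L-skewness t_3, from unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    lam1, lam2, lam3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return lam1, lam2, lam3, lam3 / lam2

def z_normal(t3, n):
    """Z statistic of Eq. 6.75: test of normality against skewed alternatives."""
    return t3 / np.sqrt((0.1866 + 0.8 / n) / n)

def z_gumbel(t3, n):
    """Z statistic of Eq. 6.76: test of a Gumbel null against a GEV alternative."""
    return (t3 - 0.17) / np.sqrt((0.2326 + 0.70 / n) / n)
```

In each case the hypothesis is rejected at significance level α when |Z| exceeds z α/2 (for example, 1.96 at the 5% level).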

The literature is full of goodness-of-fit tests. Experience indicates that among the better tests there is often not a great deal of difference (D’Agostino and Stephens 1986). Generation of a probability plot is most often a good idea because it allows the modeler to see what the data look like and where problems occur. The Kolmogorov–Smirnov test helps the eye interpret a probability plot by adding bounds to a graph illustrating the magnitude of deviations from a straight line that are consistent with expected variability. One can also use quantiles of a beta distribution to illustrate the possible error in individual plotting positions, particularly at the extremes where that uncertainty is largest. The probability plot correlation test is a popular and powerful goodness-of-fit statistic. Goodness-of-fit tests based upon sample estimators of the L-skewness \( \hat{\tau }_{3} \) for the normal and Gumbel distributions provide simple and useful tests that are not based on a probability plot.

6.3.3 Normal and Lognormal Distributions

The normal distribution and its logarithmic transformation, the lognormal distribution, are arguably the most widely used distributions in science and engineering. The density function of a normal random variable is

$$ \begin{aligned} f_{X} (x) &= \frac{1}{{\sqrt {2\pi \sigma^{2} } }}\exp\left[ { - \frac{1}{{2\sigma^{2} }}(x - \mu )^{2} } \right]\\& \quad {\text{for}} - \infty < x < + \infty \end{aligned} $$
(6.77)

where μ and σ 2 are equivalent to μ X and \( \sigma_{X}^{2} \), the mean and variance of X. Interestingly, the maximum likelihood estimators of μ and σ 2 are almost identical to the moment estimates \( \bar{x} \) and \( s_{X}^{2} \).

The normal distribution is symmetric about its mean μ X and admits values from −∞ to +∞. Thus it is not always satisfactory for modeling physical phenomena such as streamflows or pollutant concentrations, which are necessarily nonnegative and have skewed distributions. A frequently used model for skewed distributions is the lognormal distribution. A random variable X has a lognormal distribution if the natural logarithm of X, ln(X), has a normal distribution. In that case the density function of X is

$$ \begin{aligned} f_{X} (x) & = \frac{1}{{\sqrt {2\pi \sigma^{2} } }}\exp \left\{ { - \frac{1}{{2\sigma^{2} }}\left[ {\ell n(x) - \mu } \right]^{2} } \right\}\frac{{{\text{d}}(\ell nx)}}{{{\text{d}}x}} \\ & = \frac{1}{{x\sqrt {2\pi \sigma^{2} } }}\exp \left\{ { - \frac{1}{{2\sigma^{2} }}\left[ {\ell n(x/\eta )} \right]^{2} } \right\} \\ \end{aligned} $$
(6.78)

for x > 0 and μ = ln(η). Here η is the median of the X-distribution. The coefficient of skewness for the three-parameter lognormal distribution is given by

$$ \gamma = 3\upsilon + \upsilon^{3} \quad{\text{where}}\;\upsilon = [\exp \left( {{\sigma}^{2} } \right) - 1]^{0.5} $$
(6.79)

A lognormal random variable takes on values in the range (0, +∞). The parameter μ determines the scale of the X-distribution whereas σ 2 determines the shape of the distribution. The mean and variance of the lognormal distribution are given in Eq. 6.65. Figure 6.3 illustrates the various shapes the lognormal probability density function can assume. It is highly skewed with a thick right-hand tail for σ > 1, and approaches a symmetric normal distribution as σ → 0. The density function always has a value of zero at x = 0. The coefficient of variation and skew are

Fig. 6.3
figure 3

Lognormal probability density functions with various standard deviations σ

$$ \begin{aligned} {\text{CV}}_{X} & = [\exp (\sigma^{2} ) - 1]^{1/2} \\ {\gamma }_{X} & = 3{\text{CV}}_{X} + {\text{CV}}_{X}^{3} \\ \end{aligned} $$
(6.80)

The maximum likelihood estimates of μ and σ 2 are given in Eq. 6.63 and the moment estimates in Eq. 6.66. For reasonable-size samples, the maximum likelihood estimates generally perform as well as or better than the moment estimates (Stedinger 1980).

The data in Table 6.2 were used to calculate the parameters of the lognormal distribution that would describe these flood flows; the results are reported after Eq. 6.66. The two-parameter maximum likelihood and method of moments estimators identify parameter estimates for which the distribution skewness coefficients are 2.06 and 1.72, both substantially greater than the sample skew of 0.712.

A useful generalization of the two-parameter lognormal distribution is the shifted lognormal or three-parameter lognormal distribution obtained when ln(X − τ) is described by a normal distribution, where X ≥ τ. Theoretically, τ should be positive if for physical reasons X must be positive; practically, negative values of τ can be allowed when the resulting probability of negative values of X is sufficiently small.

Unfortunately, maximum likelihood estimates of the parameters μ, σ 2, and τ are poorly behaved because of irregularities in the likelihood function (Giesbrecht and Kempthorne 1976). The method of moments does fairly well when the skew of the fitted distribution is reasonably small. A method that does almost as well as the moment method for low-skew distributions, and much better for highly skewed distributions, estimates τ by

$$ \hat{\tau } = \frac{{x_{(1)} x_{(n)} - \hat{x}_{0.50}^{2} }}{{x_{(1)} + x_{(n)} - 2 \hat{x}_{0.50} }} $$
(6.81)

provided that \( x_{(1)} + x_{(n)} - 2\hat{x}_{0.50} > 0 \), where x (1) and x (n) are the smallest and largest observations and \( \hat{x}_{0.50} \) is the sample median (Stedinger 1980; Hoshi et al. 1984). If \( x_{(1)} + x_{(n)} - 2\hat{x}_{0.50} < 0 \), the sample tends to be negatively skewed and a three-parameter lognormal distribution with a lower bound cannot be fit with this method. Good estimates of μ and σ 2 to go with \( \hat{\tau } \) in Eq. 6.81 are (Stedinger 1980)

$$ \begin{aligned} & \hat{\mu } = \ell n\left[ {\frac{{\bar{x} - \hat{\tau }}}{{\sqrt {1 + s_{x}^{2} /\left( {\bar{x} - \hat{\tau }} \right)^{2} } }}} \right] \\ & \hat{\sigma }^{2} = \ell n\left[ {1 + \frac{{s_{x}^{2} }}{{\left( {\bar{x} - \hat{\tau }} \right)^{2} }}} \right] \\ \end{aligned} $$
(6.82)

For the data in Table 6.2, Eq. 6.82 yields the hybrid method-of-moments estimates for the three-parameter lognormal distribution

$$ \begin{aligned} & {\hat{\mu }} = 7.606 \\ & {\hat{\sigma }}^{2} = 0.1339 = (0.3659)^{2} \\ & {\hat{\tau }} = - 600.1 \\ \end{aligned} $$

This distribution has a coefficient of skewness of 1.19, which is more consistent with the sample skewness estimator than was the value obtained when a two-parameter lognormal distribution was fit to the data. Alternatively, one can estimate µ and σ 2 by the sample mean and variance of ln(X − \( \hat{\tau } \)) that yields the hybrid maximum likelihood estimates

$$ \begin{aligned} & {\hat{\mu }} = 7.605 \\ & {\hat{\sigma }}^{2} = 0.1407 = (0.3751)^{2} \\ & {\hat{\tau }} = - 600.1 \\ \end{aligned} $$

The two sets of estimates are surprisingly close in this instance. In this second case, the fitted distribution has a coefficient of skewness of 1.22.

Natural logarithms have been used here. One could have just as well used base 10 common logarithms to estimate the parameters; however, in that case the relationships between the log space parameters and the real-space moments change slightly (Stedinger et al. 1993, Eq. 18.2.8).
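The quantile-lower-bound estimator of Eq. 6.81, together with Eq. 6.82, is straightforward to code. The Python sketch below assumes a one-dimensional array of positive observations; applied to the flood record of Table 6.2 it should reproduce approximately the hybrid estimates listed above.

```python
import numpy as np

def fit_lognormal3(x):
    """Quantile-lower-bound estimator of the three-parameter lognormal
    distribution, Eqs. 6.81 and 6.82."""
    x = np.asarray(x, dtype=float)
    x1, xn, med = x.min(), x.max(), np.median(x)
    denom = x1 + xn - 2.0 * med
    if denom <= 0:
        raise ValueError("x(1) + x(n) - 2*median <= 0; lower-bound form not applicable")
    tau = (x1 * xn - med ** 2) / denom                            # Eq. 6.81
    m, s2 = x.mean(), x.var(ddof=1)
    mu = np.log((m - tau) / np.sqrt(1.0 + s2 / (m - tau) ** 2))   # Eq. 6.82
    sigma2 = np.log(1.0 + s2 / (m - tau) ** 2)
    return tau, mu, sigma2
```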

6.3.4 Gamma Distributions

The gamma distribution has long been used to model many natural phenomena, including daily, monthly, and annual streamflows as well as flood flows (Bobee and Ashkar 1991). For a gamma random variable X,

$$ \begin{aligned} f_{X} (x) & = \frac{\left| \beta \right|}{{{\varGamma} (\alpha )}}\left( {\beta x} \right)^{\alpha - 1} {\text{e}}^{ - \beta x} \quad \beta x \ge 0 \\ \mu_{X} & = \frac{\alpha }{\beta } \\ \sigma_{X}^{2} & = \frac{\alpha }{{\beta^{2} }} \\ \gamma_{X} & = \frac{2}{\sqrt \alpha } = 2{\text{CV}}_{X} \\ \end{aligned} $$
(6.83)

The gamma function, Γ(α), for integer α is (α − 1)!. The parameter α > 0 determines the shape of the distribution; β is the scale parameter. Figure 6.4 illustrates the different shapes that the probability density function for a gamma variable can assume. As α → ∞, the gamma distribution approaches the symmetric normal distribution, whereas for 0 < α < 1, the distribution has a highly asymmetric J-shaped probability density function whose value goes to infinity as x approaches zero.

Fig. 6.4
figure 4

The gamma distribution functions for various values of the shape parameter α

The gamma distribution arises naturally in many problems in statistics and hydrology. It also has a very reasonable shape for such nonnegative random variables as rainfall and streamflow. Unfortunately, its cumulative distribution function is not available in closed form, except for integer α, though it is available in many software packages including Microsoft Excel. The gamma family includes a very special case: the exponential distribution is obtained when α = 1.

The gamma distribution has several generalizations (Bobee and Ashkar 1991). If a constant τ is subtracted from X so that (X − τ) has a gamma distribution, the distribution of X is a three-parameter gamma distribution. This is also called a Pearson type 3 distribution, because the resulting distribution belongs to the third type of distributions suggested by the statistician Karl Pearson. Another variation is the log-Pearson type 3 distribution obtained by fitting the logarithms of X with a Pearson type 3 distribution. The log-Pearson distribution is discussed further in the next section.

The method of moments may be used to estimate the parameters of the gamma distribution. For the three-parameter gamma distribution

$$ \begin{aligned} \hat{\tau } & = \bar{x} - 2\left( {\frac{{s_{X} }}{{\hat{\gamma }_{X} }}} \right) \\ \hat{\alpha } & = \frac{4}{{\left( {\hat{\gamma }_{X} } \right)^{2} }} \\ \hat{\beta } & = \frac{2}{{s_{X} \gamma_{X} }} \\ \end{aligned} $$
(6.84)

where \( \bar{x}, s_{X}^{2} , {\text{and }}\hat{\gamma }_{X} \) are estimates of the mean, variance, and coefficient of skewness of the distribution of X (Bobee and Robitaille 1977).

For the two-parameter gamma distribution,

$$ \begin{aligned} \hat{\alpha } & = \frac{{\left( {\bar{x}} \right)^{2} }}{{s_{x}^{2} }} \\ \hat{\beta } & = \frac{{\bar{x}}}{{s_{x}^{2} }} \\ \end{aligned} $$
(6.85)

Again the flood record in Table 6.2 can be used to illustrate the different estimation procedures. Using the first three sample moments, one would obtain for the three-parameter gamma distribution the parameter estimates

$$ \begin{aligned} \hat{\tau } & = - 735.6 \\ \hat{\alpha } & = 7.888 \\ \hat{\beta } & = 0.003452 = 1/289.7 \\ \end{aligned} $$

Using only the sample mean and variance yields the method of moment estimators of the parameters of the two-parameter gamma distribution (τ = 0)

$$ \begin{aligned} \hat{\alpha } & = 3.627 \\ \hat{\beta } & = 0.002341 = 1/427.2 \\ \end{aligned} $$

The fitted two-parameter gamma distribution has a coefficient of skewness γ of 1.05 whereas the fitted three-parameter gamma reproduces the sample skew of 0.712. As occurred with the three-parameter lognormal distribution, the estimated lower bound for the three-parameter gamma distribution is negative (\( \hat{\tau } = - 735.6 \)) resulting in a three-parameter model that has a smaller skew coefficient than was obtained with the corresponding two-parameter model. The reciprocal of \( \hat{\beta } \) is often reported. While \( \hat{\beta } \) has inverse x-units, 1/\( \hat{\beta } \) is a natural scale parameter that has the same units as x and thus can be easier to interpret.

Studies by Thom (1958) and Matalas and Wallis (1973) have shown that maximum likelihood parameter estimates are superior to the moment estimates. For the two-parameter gamma distribution, Greenwood and Durand (1960) give approximate formulas for the maximum likelihood estimates (also Haan 1977). However, the maximum likelihood estimators are often not used in practice because they are very sensitive to the smallest observations that sometimes suffer from measurement error and other distortions.

When plotting the observed and fitted quantiles of a gamma distribution, an approximation to the inverse of the distribution function is often useful. For |γ| ≤ 3, the Wilson–Hilferty transformation

$$ x_{G} = \mu + \sigma \left[ {\frac{2}{\gamma }\left( {1 + \frac{{\gamma x_{N} }}{6} - \frac{{\gamma^{2} }}{36}} \right)^{3} - \frac{2}{\gamma }} \right] $$
(6.86)

gives the quantiles x G of the gamma distribution in terms of x N , the quantiles of the standard normal distribution. Here μ, σ, and γ are the mean, standard deviation, and coefficient of skewness of x G . Kirby (1972) and Chowdhury and Stedinger (1991) discuss this and other more complicated but more accurate approximations. Fortunately the availability of excellent approximations of the gamma cumulative distribution function and its inverse in Microsoft Excel and other packages has reduced the need for such simple approximations.
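The following Python sketch implements the Wilson–Hilferty approximation of Eq. 6.86 and, for illustration, compares it with an exact gamma quantile from scipy; the parameter values used are those of the two-parameter gamma fit reported above and are for illustration only.

```python
import numpy as np
from scipy.stats import norm, gamma

def wilson_hilferty(p, mean, std, skew):
    """Approximate gamma (Pearson type 3) quantile via Eq. 6.86, for |skew| <= 3."""
    z = norm.ppf(p)
    g = skew
    return mean + std * ((2.0 / g) * (1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 2.0 / g)

# Two-parameter gamma fit from the worked example: alpha = 3.627, 1/beta = 427.2
alpha_hat, scale_hat = 3.627, 427.2
mean_x = alpha_hat * scale_hat
std_x = np.sqrt(alpha_hat) * scale_hat
skew_x = 2.0 / np.sqrt(alpha_hat)
approx = wilson_hilferty(0.99, mean_x, std_x, skew_x)   # Eq. 6.86 approximation
exact = gamma.ppf(0.99, alpha_hat, scale=scale_hat)     # exact gamma quantile
```

The two quantile values should agree closely for skew coefficients of this magnitude.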

6.3.5 Log-Pearson Type 3 Distribution

The log-Pearson type 3 distribution (LP3) describes a random variable whose logarithms have a Pearson type 3 distribution. This distribution has found wide use in modeling flood frequencies and has been recommended for that purpose (IACWD 1982). Bobee (1975), Bobee and Ashkar (1991) and Griffis and Stedinger (2007a) discuss the unusual shapes that this hybrid distribution may take when negative values of β are allowed. The LP3 distribution has the probability density function

$$ \begin{aligned} f_{X} (x) & = |{\beta }|\{{\beta }[\ln (x) - {\xi }]\}^{{{\alpha } - 1}}\\& \quad \exp \{ - {\beta }[\ln (x) - {\xi }]\} /\{ x{\varGamma} ({\alpha })\} \end{aligned} $$
(6.87)

with α > 0, and β either positive or negative. For β < 0, values are restricted to the range 0 < x < exp(ξ). For β > 0, values have a lower bound so that exp(ξ) < X. Figure 6.5 illustrates the probability density function for the LP3 distribution as a function of the skew γ of the P3 distribution describing ln(X), with σlnX  = 0.3. The LP3 density function for |γ| ≤ 2 can assume a wide range of shapes with both positive and negative skews. For |γ| = 2, the log-space P3 distribution is equivalent to an exponential distribution function which decays exponentially as x moves away from the lower bound (β > 0) or upper bound (β < 0): as a result the LP3 distribution has a similar shape. The space with −1 < γ may be more realistic for describing variables whose probability density function becomes thinner as x takes on large values. For γ = 0, the 2-parameter lognormal distribution is obtained as a special case.

Fig. 6.5
figure 5

Log-Pearson type 3 probability density functions for different values of coefficient of skewness γ

The LP3 distribution has mean and variance

$$ \begin{aligned} \mu_{X} & = {\text{e}}^{\xi } \left( {\frac{\beta }{\beta - 1}} \right)^{\alpha } \\ \sigma_{X}^{2} & = {\text{e}}^{2\xi } \left\{ {\left( {\frac{\beta }{\beta - 2}} \right)^{\alpha } - \left( {\frac{\beta }{\beta - 1}} \right)^{2\alpha } } \right\}\\&\quad {\text{for}}\;{\beta } > 2, {\text{or}}\;{\beta } < 0. \\ \end{aligned} $$
(6.88)

For 0 < β < 2, the variance is infinite.

These expressions are seldom used, but they do reveal the character of the distribution. Figures 6.6 and 6.7 provide plots of the real-space coefficient of skewness and coefficient of variation of a log-Pearson type 3 variate X as a function of the standard deviation σ Y and coefficient of skew γ Y of the log-transformation Y = ln(X). Thus the standard deviation σ Y and skew γ Y of Y are in log space. For γ Y  = 0, the log-Pearson type 3 distribution reduces to the two-parameter lognormal distribution discussed above, because in this case Y has a normal distribution. For the lognormal distribution, the standard deviation σ Y serves as the sole shape parameter, and the coefficient of variation of X for small σ Y is just σ Y . Figure 6.7 shows that the situation is more complicated for the LP3 distribution. However, for small σ Y , the coefficient of variation of X is approximately σ Y .

Fig. 6.6
figure 6

Real-space coefficient of skewness γ X for LP3 distributed X as a function of log-space standard deviation σ Y and coefficient of skewness γ Y where Y = ln(X)

Fig. 6.7
figure 7

Real-space coefficient of variation CV X for LP3 distributed X as a function of log-space standard deviation σ Y and coefficient of skewness γ Y where Y = ln(X)

Again, the flood flow data in Table 6.2 can be used to illustrate parameter estimation. Using natural logarithms, one can estimate the log-space moments with the standard estimators in Eq. 6.39a, which yield

$$ \begin{aligned} \hat{\mu } & = 7.202 \\ \hat{\sigma } & = 0.5625 \\ \hat{\gamma } & = - 0.337 \\ \end{aligned} $$

For the LP3 distribution, analysis generally focuses on the distribution of the logarithms Y = ln(X) of the flows, which would have a Pearson type 3 distribution with moments µ Y , σ Y and γ Y (IACWD 1982; Bobée and Ashkar 1991). As a result, flood quantiles are calculated as

$$ x_{p} = \exp \{ \mu_{Y} + \sigma_{Y} {K}_{p} [\gamma_{Y} ]\} $$
(6.89)

where K p [γ Y ] is a frequency factor corresponding to cumulative probability p for skewness coefficient γ Y . (K p [γ Y ] corresponds to the quantiles of a three-parameter gamma distribution with zero mean, unit variance, and skewness coefficient γ Y .)
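A minimal Python sketch of Eq. 6.89 follows. It assumes the frequency factor K p [γ Y ] can be taken as the quantile of a standardized Pearson type 3 distribution, which scipy.stats.pearson3 provides; the log-space moments estimated above are used purely for illustration.

```python
import numpy as np
from scipy.stats import pearson3

def lp3_quantile(p, mu_y, sigma_y, gamma_y):
    """Flood quantile x_p of Eq. 6.89; pearson3.ppf(p, skew) returns the quantile
    of a Pearson type 3 distribution with zero mean and unit variance."""
    k_p = pearson3.ppf(p, gamma_y)
    return np.exp(mu_y + sigma_y * k_p)

# Using the log-space moments estimated from the Table 6.2 record:
x_100yr = lp3_quantile(0.99, 7.202, 0.5625, -0.337)
```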

Since 1967, the recommended procedure for flood frequency analysis by federal agencies in the United States uses this distribution. Current guidelines in Bulletin 17B (IACWD 1982) suggest that the skew γ Y be estimated by a weighted average of the at-site sample skewness coefficient and a regional estimate of the skewness coefficient. Griffis and Stedinger (2007b) compare a wide range of methods that have been recommended for fitting the LP3 distribution.

6.3.6 Gumbel and GEV Distributions

The annual maximum flood is the largest flood flow during a year. One might expect that the distribution of annual maximum flood flows would belong to the set of extreme value distributions (Gumbel 1958; Kottegoda and Rosso 1997). These are the distributions obtained in the limit, as the sample size n becomes large, by taking the largest of n independent random variables. The Extreme Value (EV) type I distribution or Gumbel distribution has often been used to describe flood flows. It has the cumulative distribution function

$$ F_{X} (x) = \exp \{ - \exp [ - (x - {\xi })/{\alpha }]\} $$
(6.90)

with mean and variance of

$$ \begin{aligned} \mu_{X} & = {\xi } + 0.5772 {\alpha } \\ \sigma_{X}^{2} & = {\pi }^{2} {\alpha }^{2} /6 \cong 1.645{\alpha }^{2} \\ \end{aligned} $$
(6.91)

Its skewness coefficient has a fixed value of γ X  = 1.1396.

The generalized extreme value (GEV) distribution is a general mathematical expression that incorporates the type I, II, and III extreme value (EV) distributions for maxima (Gumbel 1958; Hosking et al. 1985). In recent years, it has been used as a general model of extreme events including flood flows, particularly in the context of regionalization procedures (NERC 1975; Stedinger and Lu 1995; Hosking and Wallis 1997). The GEV distribution has cumulative distribution function

$$ F_{X} (x) = \exp \{ - [1 - {\kappa }(x - {\xi })/{\alpha }]^{{1/{\kappa }}} \} \quad{\text{for}}\;{\kappa } \ne 0 $$
(6.92)

For κ < 0, ξ < x < ∞, whereas for κ > 0, floods must be less than an upper bound: ξ < x < ξ + α/κ (Hosking and Wallis 1987). The mean, variance, and skewness coefficient are (for κ > −1/3)

$$ \begin{aligned} {\mu }_{X} & = {\xi } + ({\alpha }/{\kappa })[1 - {\varGamma} (1 + {\kappa })], \\ \sigma_{X}^{2} & = ({\alpha }/{\kappa })^{2} \{ {\varGamma} (1 + 2{\kappa }) - [{\varGamma} (1 + {\kappa })]^{2} \} \\ {\gamma }_{X} & = {\text{Sign}}({\kappa })\{ - {\varGamma} (1 + 3{\kappa }) + 3{\varGamma} (1 + {\kappa }){\varGamma} (1 + 2{\kappa }) \\ & \quad - 2[{\varGamma} (1 + {\kappa })]^{3} \} /\{ {\varGamma} (1 + 2{\kappa }) - [{\varGamma} (1 + {\kappa })]^{2} \}^{3/2} \\ \end{aligned} $$
(6.93)

where Γ(1 + κ) is the classical gamma function. The Gumbel distribution is obtained when κ = 0. For |κ| < 0.3, the general shape of the GEV distribution is similar to the Gumbel distribution, though the right-hand tail is thicker for κ < 0, and thinner for κ > 0, as shown in Figs. 6.8 and 6.9.

Fig. 6.8
figure 8

GEV density distributions for selected shape parameter κ values

Fig. 6.9
figure 9

Right-hand tails of GEV distributions shown in Fig. 6.8

The parameters of the GEV distribution are easily computed using L-moments and the relationships (Hosking et al. 1985)

$$ \begin{aligned} {\kappa } & = 7.8590 c + 2.9554 c^{2} \\ {\alpha } & = {\kappa }{\lambda }_{2} /[{\varGamma} (1 + {\kappa })(1-2^{{ - {\kappa }}} )] \\ {\xi } & = {\lambda }_{1} + ({\alpha }/{\kappa })[{\varGamma} (1 + {\kappa }) - 1] \\ \end{aligned} $$
(6.94)

where

$$ c = 2{\lambda }_{2} /({\lambda }_{3} + 3{\lambda }_{2} ) - \ln (2)/\ln (3) = [2/({\tau }_{3} + 3)] - \ln (2)/\ln (3) $$

As one can see, the estimator of the shape parameter κ will depend only upon the L-skewness estimator \( \hat{\tau }_{3} \). The estimator of the scale parameter α will then depend on the estimate of κ and of λ 2. Finally, one must also use the sample mean λ 1 (Eq. 6.48) to determine the estimate of the location parameter ξ.

Using the flood data in Table 6.2 and the sample L-moments computed in Sect. 6.2, one obtains first

$$ c = - 0.000896 $$

that yields

$$ \begin{aligned} \hat{\kappa } & = - 0.007036 \\ \hat{\xi } & = 1165.20 \\ \hat{\alpha } & = 657.29 \\ \end{aligned} $$

The small value of the fitted κ parameter means that the fitted distribution is essentially a Gumbel distribution. Here ξ is a location parameter, not a lower bound, so its value resembles a reasonable x value.
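The following Python sketch implements Eq. 6.94 and the expression for c; supplying the sample L-moments of the Table 6.2 record should reproduce, to within rounding, the estimates listed above.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def gev_from_l_moments(lam1, lam2, t3):
    """GEV parameter estimates (kappa, alpha, xi) from sample L-moments, Eq. 6.94."""
    c = 2.0 / (t3 + 3.0) - np.log(2.0) / np.log(3.0)
    kappa = 7.8590 * c + 2.9554 * c ** 2
    alpha = kappa * lam2 / (gamma_fn(1.0 + kappa) * (1.0 - 2.0 ** (-kappa)))
    xi = lam1 + (alpha / kappa) * (gamma_fn(1.0 + kappa) - 1.0)
    return kappa, alpha, xi
```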

Madsen et al. (1997a) show that moment estimators can provide more precise quantile estimators. Yet Martins and Stedinger (2001a, b) found that, with occasional uninformative samples, the MLE of κ could be entirely unrealistic, resulting in absurd quantile estimates. However, the use of a realistic prior distribution on κ yielded better generalized maximum likelihood estimators (GMLE) than moment and L-moment estimators over the range of κ of interest.

The GMLE estimators are obtained by maximizing the log-likelihood function, augmented by a prior density function on κ. A prior distribution that reflects general worldwide geophysical experience and physical realism is in the form of a beta distribution

$$ \begin{aligned}{\pi }({\kappa }) & = {\varGamma} (p + q)(0.5 + {\kappa })^{p - 1}\\&\quad (0.5 - {\kappa })^{q - 1} /[{\varGamma} (p){\varGamma} (q)]\end{aligned} $$
(6.95)

for −0.5 < κ < +0.5 with p = 6 and q = 9. This prior assigns reasonable probabilities to the values of κ within that range. For κ outside the range −0.4 to +0.2, the resulting GEV distributions do not have density functions consistent with flood flows and rainfall (Martins and Stedinger 2000). Other estimators implicitly impose similar constraints. For example, the L-moment estimator restricts κ to the range κ > −1, and the method of moments estimator employs the sample standard deviation so that κ > −0.5. Use of the sample skew introduces the constraint that κ > −0.3.

Then, given a set of independent observations {x 1, …, x n } drawn from a GEV distribution, the generalized likelihood function is

$$ \begin{aligned}& \ln \{ {L}({\xi },{\alpha },{\kappa }|x_{1} ,\, \ldots, \,x_{n} )\} \\& \quad = - n\ln (\alpha ) + \sum\limits_{i = 1}^{n} {\left[ {\left( {\frac{1}{\kappa } - 1} \right)\ln (y_{i} ) - (y_{i} )^{1 / \kappa } } \right]}\\& \quad \quad + \ln [{\pi }({\kappa })] \end{aligned} $$

with

$$ y_{i} = [1 - ({\kappa }/{\alpha })(x_{i} - {\xi })] $$
(6.96)

For feasible values of the parameters, each y i is greater than 0 (Hosking et al. 1985). Numerical optimization of the generalized likelihood function is often aided by the additional constraint that min{y 1, …, y n } ≥ ε for some small ε > 0 so as to prevent the search from generating infeasible parameter values for which the likelihood function is undefined. The constraint should not be binding at the final solution.

The data in Table 6.2 again provide a convenient data set for illustrating parameter estimators. The L-moment estimators were used to generate an initial solution. Numerical optimization of the likelihood function Eq. 6.96 yielded the maximum likelihood estimators of the GEV parameters

$$ \begin{aligned} \hat{\kappa } & = - 0.0359 \\ \hat{\xi } & = 1165.4 \\ \hat{\alpha } & = 620.2 \\ \end{aligned} $$

Similarly, use of the geophysical prior (Eq. 6.95) yielded the generalized maximum likelihood estimators

$$ \begin{aligned} \hat{\kappa } & = - 0.0823 \\ \hat{\xi } & = 1150.8 \\ \hat{\alpha } & = 611.4 \\ \end{aligned} $$

Here the record length of 40 years is too short to reliably define the shape parameter κ, so the effect of the prior is to pull κ slightly toward the mean of the prior. The other two parameters adjust accordingly.
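A sketch of the generalized maximum likelihood computation is given below. It assembles the generalized log-likelihood of Eq. 6.96, adds the log of the beta prior of Eq. 6.95, and hands the result to a general-purpose optimizer. The array flows of annual maxima and the L-moment starting values are assumed inputs, and the Gumbel limit κ = 0 is not treated specially in this sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def neg_generalized_log_likelihood(params, x, p=6.0, q=9.0):
    """Negative of the generalized log-likelihood (Eq. 6.96) plus the log of the
    beta prior on kappa (Eq. 6.95); returns +inf for infeasible parameter values."""
    xi, alpha, kappa = params
    if alpha <= 0.0 or not -0.5 < kappa < 0.5 or abs(kappa) < 1e-8:
        return np.inf
    y = 1.0 - (kappa / alpha) * (x - xi)
    if np.any(y <= 1e-8):          # every y_i must remain strictly positive
        return np.inf
    loglik = (-len(x) * np.log(alpha)
              + np.sum((1.0 / kappa - 1.0) * np.log(y) - y ** (1.0 / kappa)))
    return -(loglik + beta.logpdf(kappa + 0.5, p, q))

# Starting from the L-moment estimates for the Table 6.2 flows (array `flows`):
# res = minimize(neg_generalized_log_likelihood, x0=[1165.2, 657.3, -0.007],
#                args=(np.asarray(flows),), method="Nelder-Mead")
```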

6.3.7 L-Moment Diagrams

This chapter has presented several families of distributions. The L-moment diagram in Fig. 6.10 illustrates the relationships between the L-kurtosis (τ 4) and L-skewness (τ 3) for a number of the families of distributions often used in hydrology. It shows that distributions with the same coefficient of skewness still differ in the thickness of their tails, described by their kurtosis. Tail shapes are important if an analysis is sensitive to the likelihood of extreme events.

Fig. 6.10
figure 10

Relationships between L-skewness and L-kurtosis for various distributions

The normal and Gumbel distributions have a fixed shape and thus are represented by single points that fall on the Pearson type 3 (P3) curve for γ = 0, and the generalized extreme value (GEV) curve for κ = 0, respectively. The L-kurtosis/L-skewness relationships for the two-parameter and three-parameter gamma or P3 distributions are identical, as they are for the two-parameter and three-parameter lognormal distributions. This is because the addition of a location parameter does not change the range of fundamental shapes that can be generated. However, for the same skewness coefficient, the lognormal distribution has a larger kurtosis than the gamma or P3 distribution and thus assigns larger probabilities to the largest events.

As the skewness of the lognormal and gamma distributions approaches zero, both distributions become normal and their kurtosis/skewness relationships merge. For the same L-skewness, the L-kurtosis of the GEV distribution is generally larger than that of the lognormal distribution. For positive κ yielding almost symmetric or even negatively skewed GEV distributions, the GEV has a smaller kurtosis than the three-parameter lognormal distribution. The latter can be negatively skewed when τ is used as an upper bound.

Figure 6.10 also includes the three-parameter generalized Pareto distribution, whose cdf is

$$ F_{X} (x) = 1 - [1 - {\kappa }(x - {\xi })/{\alpha}]^{{1/{\kappa }}} $$
(6.97)

(Hosking and Wallis 1997). For κ = 0 it corresponds to the exponential distribution (gamma with α = 1). This point is where the Pareto and P3 distribution L-kurtosis/L-skewness lines cross. The Pareto distribution becomes increasingly more skewed for κ < 0, which is the range of interest in hydrology. The generalized Pareto distribution with κ < 0 is often used to describe peaks over a threshold and other variables whose density function has its maximum at their lower bound. In that range, for a given L-skewness, the Pareto distribution always has a larger kurtosis than the gamma distribution. In these cases the α parameter for the gamma distribution would need to be in the range 0 < α < 1, so that both distributions would be J-shaped.

As shown in Fig. 6.10, the GEV distribution has a thicker right-hand tail than either the gamma/Pearson type 3 distribution or the lognormal distribution.

6.4 Analysis of Censored Data

There are many instances in water resources planning where one encounters censored data . A data set is censored if the values of observations that are outside a specified range of values are not specifically reported (David 1981). For example, in water quality investigations many constituents have concentrations that are reported as <T, where T is a reliable detection threshold (MacBerthouex and Brown 2002). Thus the concentration of the water quality variable of interest was too small to be reliably measured. Likewise, low-flow observations and rainfall depths can be rounded to or reported as zero. Several approaches are available for analysis of censored data sets including probability plots and probability plot regression, conditional probability models, and maximum likelihood estimators (Haas and Scheff 1990; Helsel 1990; Kroll and Stedinger 1996; MacBerthouex and Brown 2002).

Historical and physical paleoflood data provide another example of censored data . Before the beginning of a continuous measurement program on a stream or river, the stages of unusually large floods can be estimated based on the memories of humans who have experienced these events and/or physical markings in the watershed (Stedinger and Baker 1987). Before continuous measurements were taken that provided this information, the annual maximum floods that were not unusual were not recorded. These missing data are censored data. They cover periods between occasionally large floods that have been recorded or that have left some evidence of their occurrence (Stedinger and Cohn 1986).

The discussion below addresses probability plot methods for use with censored data. Probability plot methods have a long history of use with censored data because they are relatively simple to use and to understand. Moreover, recent research has shown that they are relatively efficient when the majority of values are observed, and unobserved values are known only to be below (or above) some detection limit or perception threshold that serves as a lower (or upper) bound. In such cases, probability plot regression estimators of moments and quantiles are as accurate as maximum likelihood estimators. They are almost as good as estimators computed with complete samples (Helsel and Cohn 1988; Kroll and Stedinger 1996).

Perhaps the simplest method for dealing with censored data is adoption of a conditional probability model. Such models implicitly assume that the data are drawn from one of two classes of observations: those below a single threshold, and those above the threshold. This model is appropriate for simple cases where censoring occurs because small observations are recorded as “zero,” as often happens with low-flow, low pollutant concentration, and some flood records. The conditional probability model introduces an extra parameter P 0 to describe the probability that an observation is “zero.” If r of a total of n observations were observed because they exceeded the threshold, then P 0 is estimated as (n − r)/n. A continuous distribution G X (x) is derived for the strictly positive “nonzero” values of X. Then the parameters of the G distribution can be estimated using any procedure appropriate for complete uncensored samples. The unconditional cumulative distribution function (cdf) F X (x) for any value x > 0, is then

$$ F_{X} (x) = P_{0} + \left( {1 - P_{0} } \right)G(x) $$
(6.98)

This model completely decouples the value of P 0 from the parameters that describe the G distribution.

Section 6.3.2 discusses probability plots and plotting positions useful for the graphical display of data to allow a visual examination of the empirical frequency curve. Suppose that among n samples a detection limit is exceeded by the observations r times. The natural estimator of P 0, the probability of not exceeding the perception threshold, is again (n − r)/n. If the r values that exceeded the threshold are indexed by i = 1, …, r, wherein x (r) is the largest, then reasonable plotting positions within the interval [P 0, 1] are

$$ p_{i} = P_{0} + (1 - P_{0} )\left[ {(i - a)/\left( {r + 1-2a} \right)} \right] $$
(6.99)

where a defines the plotting position that is used. Helsel and Cohn (1988) show that reasonable choices for a generally make little difference. Letting a = 0 is reasonable (Hirsch and Stedinger 1987). Both papers discuss development of plotting positions when there are different thresholds, as occurs when the analytical precision of instrumentation changes over time. If there are many exceedances of the threshold so that r ≫ (1 − 2a), p i is indistinguishable from

$$ p_{i}^{\prime } = \left[ {i + (n - r) - a} \right]/\left( {n + 1-2a} \right). $$
(6.100)

where again, i = 1, …, r. These values correspond to the plotting positions that would be assigned to the largest r observations in a complete sample of n values.

The idea behind the probability plot regression estimators is to use the probability plot for the observed data to define the parameters of the whole distribution. And if a sample mean, sample variance, or quantiles are needed, then the distribution defined by the probability plot is used to fill in the missing (censored) observations so that standard estimators of the mean, of the standard deviation, and of the quantiles can be employed. Such fill-in procedures are efficient and relatively robust for fitting a distribution and estimating various statistics with censored water quality data when a modest number of the smallest observations are censored (Helsel 1990; Kroll and Stedinger 1996).

Unlike the conditional probability approach, here the below threshold probability P 0 is linked with the selected probability distribution for the above-threshold observations. The observations below the threshold are censored but are in all other respects envisioned as coming from the same distribution that is used to describe the observed above-threshold values.

When water quality data are well described by a lognormal distribution, the available values ln[X (1)] ≤ ··· ≤ ln[X (r)] can be regressed upon the standard normal quantiles \( z_{p_{i}} \) corresponding to the plotting positions p i , using the model ln[X (i)] = µ + σ\( z_{p_{i}} \) for i = 1, …, r, where the r largest observations in a sample of size n are available. If the regression yields constant m and slope s, corresponding to the population moments µ and σ, a good estimator of the pth quantile is

$$ x_{p} = \exp \left[ {m + sz_{p} } \right] $$
(6.101)

where z p is the pth quantile of the standard normal distribution. To estimate sample means and other statistics one can fill in the missing observations with

$$ x(j) = \exp \left\{ {y(j)} \right\}\quad {\text{for}}\;j = 1,\, \ldots \,,(n - r) $$
(6.102)

where

$$ y(j) = m + s{F}^{ - 1} \left\{ {P_{0} \left[ {\left( {j - a} \right)/\left( {n - r + 1-2a} \right)} \right]} \right\} $$
(6.103)

Once a complete sample is constructed, standard estimators of the sample mean and variance can be calculated, as can medians and ranges. By filling in the missing small observations, and then using complete-sample estimators of statistics of interest , the procedure is relatively insensitive to the assumption that the observations actually have a lognormal distribution.
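The fill-in procedure of Eqs. 6.99–6.103 can be sketched in a few lines of Python. The function below assumes that `observed` holds the r detected (above-threshold) values out of n samples in total and that a lognormal model is appropriate; the names used are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fill_in_censored_lognormal(observed, n, a=0.0):
    """Probability-plot regression for left-censored lognormal data,
    Eqs. 6.99-6.103."""
    x = np.sort(np.asarray(observed, dtype=float))
    r = len(x)
    p0 = (n - r) / n                                      # below-threshold probability
    i = np.arange(1, r + 1)
    p = p0 + (1.0 - p0) * (i - a) / (r + 1.0 - 2.0 * a)   # Eq. 6.99
    s, m = np.polyfit(norm.ppf(p), np.log(x), 1)          # slope ~ sigma, intercept ~ mu
    j = np.arange(1, n - r + 1)
    y = m + s * norm.ppf(p0 * (j - a) / (n - r + 1.0 - 2.0 * a))   # Eq. 6.103
    filled = np.concatenate([np.exp(y), x])               # Eq. 6.102 plus observed data
    return m, s, filled
```

Quantile estimates then follow from Eq. 6.101, and sample means, variances, or medians can be computed directly from the filled-in sample.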

Maximum likelihood estimators are quite flexible, and are more efficient than plotting position methods when the values of the observations are not recorded because they are below the perception threshold (Kroll and Stedinger 1996). Maximum likelihood methods allow the observations to be represented by exact values, ranges, and various thresholds that either were or were not exceeded at various times. This can be particularly important with historical flood data sets because the magnitudes of many historical floods are not recorded precisely, and it may be known that a threshold was never crossed or was crossed at most once or twice in a long period (Stedinger and Cohn 1986; Stedinger 2000; O’Connell et al. 2002). Unfortunately, maximum likelihood estimators for the LP3 distribution have proven to be problematic. However, expected moment estimators seem to do as well as MLEs with the LP3 distribution (Cohn et al. 1997, 2001).

While often a computational challenge, maximum likelihood estimators for complete samples, and samples with some observations censored, pose no conceptual challenge. One need only write the likelihood function for the data and then seek the parameter values that maximize that function. Thus if F(x|θ) and f(x|θ) are the cumulative distribution and probability density functions that should describe the data, and θ are its parameters, then for the case described above wherein x 1, …, x r are r of n observations that exceeded a threshold T, the likelihood function would be (Stedinger and Cohn 1986)

$$ {L}(\varvec{\theta}|r,n,x_{1} ,\, \ldots ,\,{x}_{r} ) = F(T|\varvec{\theta})^{(n - r)} f(x_{1} |\varvec{\theta})f(x_{2} |\varvec{\theta})\, \ldots \,f(x_{r} |\varvec{\theta}) $$
(6.104)

Here (n − r) observations were below the threshold T, and the probability an observation is below T is F(T|θ), which then appears in Eq. 6.104 to represent each of those observations. In addition the specific values of the r observations x 1, …, x r are available. The probability that an observation falls in a small interval of width δ around x i is δ f(x i |θ); thus, strictly speaking, the likelihood function also includes a factor \( \delta^{r} \). Here what is known of the magnitude of all of the n observations is included in the likelihood function in the appropriate way. If all that were known of some observation was that it exceeded a threshold M, then that value should be represented by a term [1 − F(M|θ)] in the likelihood function. Similarly, if all that was known was that the value was between L and M, then a term [F(M|θ) − F(L|θ)] should be included in the likelihood function. Different thresholds can be used to describe different observations, corresponding to changes in the quality of measurement procedures. Numerical methods can be used to identify the parameter vector that maximizes the likelihood function for the data available.
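For a concrete illustration, the sketch below writes the negative logarithm of Eq. 6.104 for a lognormal model with (n − r) observations known only to lie below a threshold T, and minimizes it numerically; the observation array, the count of censored values, and the threshold are assumed, hypothetical inputs.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def neg_log_likelihood_censored(params, observed, n_below, threshold):
    """Negative logarithm of Eq. 6.104 for a lognormal model: n_below observations
    are known only to lie below 'threshold'; values in 'observed' are exact."""
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    dist = lognorm(s=sigma, scale=np.exp(mu))
    return -(n_below * dist.logcdf(threshold) + np.sum(dist.logpdf(observed)))

# Hypothetical usage with r detected concentrations `obs`, n_below censored
# values, and detection limit T:
# res = minimize(neg_log_likelihood_censored, x0=[0.0, 1.0],
#                args=(np.asarray(obs), n_below, T), method="Nelder-Mead")
```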

6.5 Regionalization and Index-Flood Method

Research has demonstrated the potential advantages of “index flood” procedures (Lettenmaier et al. 1987; Stedinger and Lu 1995; Hosking and Wallis 1997; Madsen and Rosbjerg 1997a). The idea behind the index-flood approach is to use the data from many hydrologically “similar” basins to estimate a dimensionless flood distribution (Wallis 1980). Thus this method “substitutes space for time” using regional information to compensate for having relatively short records at each site. The concept underlying the index-flood method is that the distributions of floods at different sites in a “region” are the same except for a scale or index-flood parameter that reflects the size, rainfall, and runoff characteristics of each watershed . Research is revealing when this assumption may be reasonable. Often a more sophisticated multi-scaling model is appropriate (Gupta and Dawdy 1995a; Robinson and Sivapalan 1997).

Generally the mean is employed as the index flood. The problem of estimating the pth quantile x p is then reduced to estimating the mean for a site µ x , and the ratio x p /µ x of the pth quantile to the mean. The mean can often be estimated adequately with the record available at a site, even if that record is short. The indicated ratio is estimated using regional information. The British Flood Studies Report (NERC 1975) calls these normalized flood distributions growth curves.

Key to the success of the index-flood approach is identification of sets of basins that have similar coefficients of variation and skew. Basins can be grouped geographically, as well as by physiographic characteristics including drainage area and elevation. Regions need not be geographically contiguous. Each site can potentially be assigned its own unique region consisting of sites with which it is particularly similar (Zrinji and Burn 1994), or regional regression equations can be derived to compute normalized regional quantiles as a function of a site’s physiographic characteristics and other statistics (Fill and Stedinger 1998).

Clearly the next step for regionalization procedures, such as the index-flood method , is to move away from estimates of regional parameters that do not depend upon basin size and other physiographic parameters. Gupta et al. (1994) argue that the basic premise of the index-flood method, that the coefficient of variation of floods is relatively constant, is inconsistent with the known relationships between the coefficient of variation CV and drainage area (see also Robinson and Sivapalan 1997). Recently, Fill and Stedinger (1998) built such a relationship into an index-flood procedure using a regression model to explain variations in the normalized quantiles . Tasker and Stedinger (1986) illustrated how one might relate log-space skew to physiographic basin characteristics (see also Gupta and Dawdy 1995b). Madsen and Rosbjerg (1997b) did the same for a regional model of κ for the GEV distribution. In both studies, only a binary variable representing “region” was found useful in explaining variations in these two shape parameters.

Once a regional model of alternative shape parameters is derived, there may be some advantage to combining such regional estimators with at-site estimators employing an empirical Bayesian framework or some other weighting schemes. For example, Bulletin 17B recommends weighting at-site and regional skewness estimators, but almost certainly places too much weight on the at-site values (Tasker and Stedinger 1986). Examples of empirical Bayesian procedures are provided by Kuczera (1982), Madsen and Rosbjerg (1997b) and Fill and Stedinger (1998). Madsen and Rosbjerg’s (1997b) computation of a κ-model with a New Zealand data set demonstrates how important it can be to do the regional analysis carefully, taking into account the cross-correlation among concurrent flood records.

When one has relatively few data at a site, the index-flood method is an effective strategy for deriving flood frequency estimates. However, as the length of the available record increases it becomes increasingly advantageous to also use the at-site data to estimate the coefficient of variation as well. Stedinger and Lu (1995) found that the L-moment/GEV index-flood method did quite well for “humid regions” (CV ≈ 0.5) when n < 25, and for semiarid regions (CV ≈ 1.0) for n < 60, if reasonable care is taken in selecting the stations to be included in a regional analysis. However, with longer records it became advantageous to use the at-site mean and L-CV with a regional estimator of the shape parameter for a GEV distribution. In many cases this would be roughly equivalent to fitting a Gumbel distribution corresponding to a shape parameter κ = 0. Gabriele and Arnell (1991) develop the idea of having regions of different size for different parameters. For realistic hydrologic regions, these and other studies illustrate the value of regionalizing estimators of the shape, and often the coefficient of variation of a distribution.

6.6 Partial Duration Series

Two general approaches are available for modeling flood and precipitation series (Langbein 1949). An annual maximum series considers only the largest event in each year. A partial duration series (PDS) or peaks-over-threshold (POT) approach includes all “independent” peaks above a truncation or threshold level. An objection to using annual maximum series is that it employs only the largest event in each year, regardless of whether the second largest event in a year exceeds the largest events of other years. Moreover, the largest annual flood flow in a dry year in some arid or semiarid regions may be zero, or so small that calling it a flood is misleading. When considering rainfall series or pollutant discharge events, one may be interested in modeling all events that occur within a year that exceed some threshold of interest.

Use of a partial duration series (PDS) framework avoids such problems by considering all independent peaks that exceed a specified threshold. And, one can estimate annual exceedance probabilities from the analysis of PDS. Arguments in favor of PDS are that relatively long and reliable PDS records are often available, and if the arrival rate for peaks over the threshold is large enough (1.65 events/year for the Poisson arrival with exponential-exceedance model), PDS analyses should yield more accurate estimates of extreme quantiles than the corresponding annual maximum frequency analyses (NERC 1975; Rosbjerg 1985). However, when fitting a three-parameter distribution, there seems to be little advantage from using a PDS approach over an annual maximum approach, even when the partial duration series includes many more peaks than the maximum series because both contain the same largest events (Martins and Stedinger 2001a).

A drawback of PDS analyses is that one must have criteria to identify only independent peaks (and not multiple peaks corresponding to the same event). Thus PDS analysis can be more complicated than analyses using annual maxima. Partial duration models, perhaps with parameters that vary by season, are often used to estimate expected damages from hydrologic events when more than one damage-causing event can occur in a season or within a year (North 1980).

A model of a PDS series has at least two components: first, one must model the arrival rate of events larger than the threshold level; second, one must model the magnitudes of those events. For example, a Poisson distribution has often been used to model the arrival of events, and an exponential distribution to describe the magnitudes of peaks that exceed the threshold.

There are several general relationships between the probability distribution for annual maximum and the frequency of events in a partial duration series. For a PDS model, let λ be the average arrival rate of flood peaks greater than the threshold x 0 and let G(x) be the probability that flood peaks, when they occur, are less than x > x 0, and thus those peaks fall in the range [x 0, x]. The annual exceedance probability for a flood, denoted 1/T a, corresponding to an annual return period T a, is related to the corresponding exceedance probability q e = [1 − G(x)] for level x in the partial duration series by

$$ 1/T_{\text{a}} = 1 - \exp \{ - {\uplambda}q_{\text{e}} \} = 1 - \exp \{ - 1/T_{\text{p}} \} $$
(6.105)

where T p = 1/(λq e) is the average return period for level x in the PDS.
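A one-line Python helper illustrates Eq. 6.105; for example, a partial duration series return period of 100 years corresponds to an annual return period of roughly 100.5 years.

```python
import numpy as np

def annual_return_period(T_p):
    """Annual-maximum return period T_a implied by a partial duration series
    return period T_p = 1/(lambda * q_e), from Eq. 6.105."""
    return 1.0 / (1.0 - np.exp(-1.0 / np.asarray(T_p, dtype=float)))

# annual_return_period(100.0) is about 100.5 years
```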

Many different choices for G(x) may be reasonable. In particular, the Generalized Pareto distribution (GPD) is a simple distribution useful for describing floods that exceed a specified lower bound. The cumulative distribution function for the generalized three-parameter Pareto distribution is

$$ F_{X} (x) = 1 - [1 - {\kappa }(x - {\xi })/{\alpha }]^{{1/{\kappa }}} $$
(6.106)

with mean and variance

$$ \begin{aligned} {\mu }_{X} & = {\xi } + {\alpha }/(1 + {\kappa }) \\ {\sigma }_{X}^{2} & = {\alpha }^{2} /[(1 + {\kappa })^{2} \left( {1 + 2{\kappa }} \right)] \\ \end{aligned} $$
(6.107)

where for κ < 0, ξ < x < ∞, whereas for κ > 0, ξ < x < ξ + α/κ (Hosking and Wallis 1987). A special case of the GPD is the two-parameter exponential distribution, obtained with κ = 0. Method of moments estimators work relatively well (Rosbjerg et al. 1992).
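The method-of-moments estimators for the GPD, given a known threshold ξ, follow directly from inverting the mean and variance expressions in Eq. 6.107. The Python sketch below assumes the array of exceedances contains the independent peaks above the threshold.

```python
import numpy as np

def gpd_method_of_moments(exceedances, xi):
    """Method-of-moments estimates of kappa and alpha for the generalized Pareto
    distribution of Eq. 6.106, with the threshold xi treated as known."""
    x = np.asarray(exceedances, dtype=float)
    m = x.mean() - xi                    # mean excess over the threshold
    s2 = x.var(ddof=1)
    kappa = 0.5 * (m * m / s2 - 1.0)     # from (mean - xi)^2 / variance = 1 + 2*kappa
    alpha = m * (1.0 + kappa)            # from mean - xi = alpha / (1 + kappa)
    return kappa, alpha
```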

Use of a generalized Pareto distribution for G(x) with a Poisson arrival model yields a GEV distribution for the annual maximum series greater than x 0 (Smith 1984; Stedinger et al. 1993; Madsen et al. 1997a). The Poisson-Pareto and Poisson-GPD models are a very reasonable description of flood risk (Rosbjerg et al. 1992). They have the advantage that they focus on the distribution of the larger flood events, and regional estimates of the GEV distribution’s shape parameter κ from annual maximum and PDS analyses can be used interchangeably. Martins and Stedinger (2001a, b) compare PDS estimation procedures as well as demonstrating that use of the three-parameter Poisson-GPD model instead of a three-parameter GEV distribution generally results in flood quantile estimators with the same precision.

Madsen and Rosbjerg (1997a) use a Poisson-GPD model as the basis of a PDS index-flood procedure. Madsen et al. (1997b) show that the estimators are fairly efficient. They pooled information from many sites to estimate the single shape parameter κ and the arrival rate where the threshold was a specified percentile of the daily flow duration curve at each site. Then at-site information was used to estimate the mean above-threshold flood. Alternatively one could use the at-site data to estimate the arrival rate as well.

6.7 Stochastic Processes and Time Series

Many important random variables in water resources are functions whose values change with time. Historical records of rainfall or streamflow at a particular site are a sequence of observations called a time series. In a time series, the observations are ordered by time, and it is generally the case that the observed value of the random variable at one time influences the distribution of the random variable at later times. This means that the observations are not independent. Time series are conceptualized as being a single observation of a stochastic process, which is a generalization of the concept of a random variable.

This section has three parts. The first presents the concept of stationarity and the basic statistics generally used to describe the properties of a stationary stochastic process . The second presents the definition of a Markov process and the Markov chain model. Markov chains are a convenient model for describing many phenomena, and are often used in synthetic flow generation and optimization models . The third part discusses the sampling properties of statistics used to describe the characteristics of many time series .

6.7.1 Describing Stochastic Processes

A random variable whose value changes through time according to probabilistic laws is called a stochastic process. An observed time series is considered to be one realization of a stochastic process, just as a single observation of a random variable is one possible value the random variable may assume. In the development here, a stochastic process is a sequence of random variables {X(t)} ordered by a discrete time index t = 1, 2, 3, ….

The properties of a stochastic process must generally be determined from a single time series or realization. To do this several assumptions are usually made. First, one generally assumes that the process is stationary, at least in the short run. This means that the probability distribution of the process is not changing over some specified interval of time. In addition, if a process is strictly stationary, the joint distribution of the random variables X(t 1), …, X(t n ) is identical to the joint distribution of X(t 1 + t), …, X(t n  + t) for any t; the joint distribution depends only on the differences t i  − t j between the times of occurrence of the events. In other words, its shape does not change over time if the distribution is stationary. In the long run, however, because of climate and land changes, many hydrologic distributions are not stationary, and just how much they will change in the future is uncertain.

For a stationary stochastic process, one can write the mean and variance as

$$ \mu_{X} = {E}\left[ {X(t)} \right] $$
(6.109)

and

$$ \sigma_{X}^{2} = {Var}\left[ {X(t)} \right] $$
(6.110)

Both are independent of time t. The autocorrelations , the correlation of X with itself, are given by

$$ \rho_{X} \left( k \right) = \frac{{\text{Cov}\left[ {X(t),X(t + k)} \right]}}{{\sigma_{X}^{2} }} $$
(6.111)

for any positive integer k (the time lag). These are the statistics most often used to describe stationary stochastic processes .

When one has available only a single time series, it is necessary to estimate the values of μ X , \( \sigma_{X}^{2} \), and ρ X (k) from values of the random variable that one has observed. The mean and variance are generally estimated essentially as they were in Eq. 6.39a.

$$ \hat{\mu }_{X} = \overline{X} = \frac{1}{T}\sum\limits_{t = 1}^{T} {X_{t} } $$
(6.112)
$$ \hat{\sigma }_{X}^{2} = \frac{1}{T}\sum\limits_{t = 1}^{T} {\left( {X_{t} - \overline{X}} \right)}^{2} $$
(6.113)

while the autocorrelations ρ X (k) can be estimated as (Jenkins and Watts 1968)

$$ \hat{\rho }_{x} \left( k \right) = r_{k} = \frac{{\sum\nolimits_{t = 1}^{T - k} {\left( {x_{t + k} - \bar{x}} \right)\left( {x_{t} - \bar{x}} \right)} }}{{\sum\nolimits_{t = 1}^{T} {\left( {x_{t} - \bar{x}} \right)^{2} } }} $$
(6.114)
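The following Python sketch computes these sample statistics for a time series stored in a one-dimensional array; it follows Eqs. 6.112–6.114 directly.

```python
import numpy as np

def series_statistics(x, k):
    """Sample mean, variance, and lag-k autocorrelation of a time series,
    following Eqs. 6.112-6.114."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xbar = x.mean()                                    # Eq. 6.112
    var = np.sum((x - xbar) ** 2) / T                  # Eq. 6.113
    r_k = (np.sum((x[k:] - xbar) * (x[:T - k] - xbar))
           / np.sum((x - xbar) ** 2))                  # Eq. 6.114
    return xbar, var, r_k
```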

The sampling distribution of these estimators depends on the correlation structure of the stochastic process giving rise to the time series . In particular, when the observations are positively correlated as is usually the case in natural streamflows or annual benefits in a river basin simulation , the variances of the estimated \( \bar{x} \, {\text{and }}\hat{\sigma }_{X}^{2} \) are larger than would be the case if the observations were independent. It is sometimes wise to take this inflation into account. Section 6.7.3 discusses the sampling distribution of these statistics.

All of this analysis depends on the assumption of stationarity, for only then do the quantities defined in Eqs. 6.109–6.111 have the intended meaning. Stochastic processes are not always stationary. Urban development, deforestation, agricultural development, climatic variability, and changes in regional resource management can alter the distribution of rainfall, streamflows, pollutant concentrations, sediment loads, and groundwater levels over time. If a stochastic process is not essentially stationary over the time span in question, then statistical techniques that rely on the stationarity assumption cannot be employed and the problem generally becomes much more difficult.

6.7.2 Markov Processes and Markov Chains

A common assumption in many stochastic water resources models is that the stochastic process X(t) is a Markov process. A first-order Markov process has the property that the dependence of future values of the process on past values depends only on the current value. In symbols for k > 0,

$$ F_{X} [X(t + k)|X(t),X(t - 1),X(t - 2),\, \ldots ] = F_{X} [X(t + k)|X(t)] $$
(6.115)

For Markov processes, the current value summarizes the state of the process. As a consequence, the current value of the process is often referred to as the state. This makes physical sense as well when one refers to the state or level of an aquifer or reservoir.

A special kind of Markov process is one whose state X(t) can take on only discrete values. Such a process is called a Markov chain. Often in water resources planning, continuous stochastic processes are approximated by Markov chains. This is done to facilitate the construction of simpler stochastic models. This section presents the basic notation and properties of Markov chains.

Consider a stream whose annual flow is to be represented by a discrete random variable . Assume that the distribution of streamflows is stationary. In the following development, the continuous random variable representing the annual streamflows (or some other process) is approximated by a random variable Q y in year y, which takes on only n discrete values q i (each value representing a continuous range or interval of possible streamflows) with unconditional probabilities p i where

$$ \sum\limits_{i = 1}^{n} {p_{i} = 1} $$
(6.116)

It is frequently the case that the value of Q y+1 is not independent of Q y . A Markov chain can model such dependence. This requires specification of the transition probabilities p ij ,

$$ p_{ij} = \Pr [Q_{{{y} + 1}} = q_{j} |Q_{y} = q_{i} ] $$
(6.117)

A transition probability is the conditional probability that the next state is q j , given that the current state is q i . The transition probabilities must satisfy

$$ \sum\limits_{j = 1}^{n} {p_{ij} = 1\quad {\text{for}}\;{\text{all}}\;i} $$
(6.118)

Figure 6.11a, b show a possible set of transition probabilities in a matrix and as histograms. Each element p ij in the matrix is the probability of a transition from streamflow q i in one year to streamflow q j in the next. In this example, a low flow tends to be followed by a low flow, rather than a high flow, and vice versa.

Fig. 6.11

a Matrix of streamflow transition probabilities showing the probability of streamflow q j (represented by index j) in year y + 1 given streamflow q i (represented by index i) in year y. b Histograms of the same transition probabilities

Let P be the transition matrix whose elements are p ij . For a Markov chain, the transition matrix contains all the information necessary to describe the behavior of the process. Let \( p_{i}^{y} \) be the probability that the process resides in state i in year y. Then the probability that Q y+1 = q j is the sum over all states i of the probability \( p_{i}^{y} \) that Q y  = q i times the transition probability p ij that the next state is q j given that Q y  = q i . In symbols, this relationship is written

$$ p_{j}^{y + 1} = p_{1}^{y} p_{1j} + p_{2}^{y} p_{2j} + \cdots + p_{n}^{y} p_{nj} = \sum\limits_{i = 1}^{n} {p_{i}^{y} p_{ij} } $$
(6.119)

Letting p y be the row vector of state resident probabilities \( \left( {p_{1}^{y} ,\, \ldots ,\,p_{n}^{y} } \right) \), this relationship may be written

$$ \varvec{p}^{{{(y} + 1)}} = \varvec{p}^{{({y})}} \varvec{P} $$
(6.120)

To calculate the probabilities of each streamflow state in year y + 2, one can use p (y+1) in Eq. 6.120 to obtain p (y+2) = p (y+1) P, or equivalently p (y+2) = p y P 2.

Continuing in this manner, it is possible to compute the probabilities of each possible streamflow state for years y + 1, y + 2, y + 3, …, y + k, … as

$$ \varvec{p}^{{({y } + { k})}} = \varvec{p}^{y} P^{{\mathbf{k}}} $$
(6.121)

Returning to the four-state example in Fig. 6.11, assume that the flow in year y is in the interval represented by q 2. Hence in year y the unconditional streamflow probabilities \( p_{i}^{y} \) are (0, 1, 0, 0). Knowing each \( p_{i}^{y} \), the probabilities \( p_{j}^{y + 1} \) corresponding to each of the four streamflow states can be determined. From Fig. 6.11, the probabilities \( p_{j}^{y + 1} \) are 0.2, 0.4, 0.3, and 0.1 for j = 1, 2, 3, and 4, respectively. The probability vectors for nine future years are listed in Table 6.8.

Table 6.8 Successive streamflow probabilities based on transition probabilities in Fig. 6.11

As time progresses, the probabilities generally reach limiting values, called the unconditional or steady-state probabilities. The quantity p i , defined earlier as the unconditional probability of q i , is the steady-state probability which the corresponding component of p (y+k) approaches for large k. It is clear from Table 6.8 that as k becomes larger, Eq. 6.119 becomes

$$ p_{j} = \sum\limits_{i = 1}^{n} {p_{i} p_{i j} } $$
(6.122)

or in vector notation, Eq. 6.122 becomes

$$ \varvec{p} = \varvec{p}\varvec{P} $$
(6.123)

where p is the row vector of unconditional probabilities (p 1, …, p n ). For the example in Table 6.8, the probability vector p equals (0.156, 0.309, 0.316, 0.219).

The steady-state probabilities for any Markov chain can be found by solving simultaneous Eqs. 6.123 for all but one of the states j together with the constraint

$$ \sum\limits_{i = 1}^{n} {p_{i} = 1} $$
(6.124)
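
The procedure just outlined can be sketched in a few lines of Python. The transition matrix below is only an assumption for illustration (its second row matches the conditional probabilities 0.2, 0.4, 0.3, 0.1 quoted above for the Fig. 6.11 example, but the remaining rows are invented); with the actual Fig. 6.11 values, the steady-state vector would approach (0.156, 0.309, 0.316, 0.219) as given in Table 6.8.

```python
import numpy as np

# Illustrative four-state transition matrix; only the second row is taken
# from the text, the other rows are assumed for this sketch.
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.3, 0.4, 0.2],
              [0.1, 0.2, 0.3, 0.4]])

# Propagate state probabilities forward (Eqs. 6.119-6.121), starting from
# certainty that year y is in state 2, i.e., p^y = (0, 1, 0, 0).
p = np.array([0.0, 1.0, 0.0, 0.0])
for _ in range(9):
    p = p @ P                       # p^(y+1) = p^(y) P  (Eq. 6.120)
print("after 9 years:", p.round(3))

# Steady-state probabilities (Eqs. 6.123-6.124): solve p = pP with one of
# the n equations replaced by the constraint sum(p) = 1.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1, :], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
steady = np.linalg.solve(A, b)
print("steady state:", steady.round(3))
```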

Annual streamflows are seldom as highly correlated as the flows in this example. However, monthly, weekly, and especially daily streamflows generally have high serial correlations. Assuming that the unconditional steady-state probability distributions for monthly streamflows are stationary, a Markov chain can be defined for each month’s streamflow. Since there are 12 months in a year, there would be 12 transition matrices, the elements of which could be denoted as \( p_{ij}^{t} \). Each defines the probability of a streamflow q j in month t + 1, given a streamflow q i in month t. The steady-state stationary probability vectors for each month can be found by the procedure outlined above, except that now all 12 matrices are used to calculate all 12 steady-state probability vectors. However, once the steady-state vector p is found for one month, the others are easily computed using Eq. 6.121 with t replacing y.

6.7.3 Properties of Time Series Statistics

The statistics most frequently used to describe the distribution of a continuous-state stationary stochastic process are the sample mean, variance , and various autocorrelations . Statistical dependence among the observations, as is frequently the case in time series , can have a marked effect on the distribution of these statistics. This part of Sect. 6.7 reviews the sampling properties of these statistics when the observations are a realization of a stochastic process .

The sample mean

$$ \overline{X} = \frac{1}{n}\sum\limits_{i = 1}^{n} {X_{i} } $$
(6.125)

when viewed as a random variable is an unbiased estimate of the mean of the process μ X , because

$$ E[\overline{X}] = \frac{1}{n}\sum\limits_{i = 1}^{n} {E\left[ {X_{i} } \right]} = \mu_{X} $$
(6.126)

However, correlation among the X i ’s, so that ρ X (k) ≠ 0 for k > 0, affects the variance of the estimated mean \( \overline{X} \).

$$ \begin{aligned} {\text{Var}}\left( {\overline{X}} \right) & = E\left[ {\left( {\overline{X} - \mu_{X} } \right)^{2} } \right] \\& = \frac{1}{{n^{2} }}E\left\{ {\sum\limits_{t = 1}^{n} {\sum\limits_{s = 1}^{n} {\left( {X_{t} - \mu_{X} } \right)\left( {X_{s} - \mu_{X} } \right)} } } \right\} \\ & = \frac{{\sigma_{X}^{2} }}{n}\left\{ {1 + 2\sum\limits_{k = 1}^{n - 1} {\left( {1 - \frac{k}{n}} \right)\rho_{X} \left( k \right)} } \right\} \\ \end{aligned} $$
(6.127)

The variance of \( \overline{X} \), equal to \( \sigma_{X}^{2} /n \) for independent observations, is inflated by the factor within the brackets. For ρ X (k) ≥ 0, as is often the case, this factor is a non-decreasing function of n, so that the variance of \( \overline{X} \) is inflated by a factor whose importance does not decrease with increasing sample size. This is an important observation, because it means the average of a correlated time series will be less precise than the average of a sequence of independent random variables of the same length with the same variance.

A common model of stochastic series has

$$ {\rho }_{X} (k) = [{\rho }_{X} (1)]^{k} = {\rho }^{k} $$
(6.128)

This correlation structure arises from the autoregressive Markov model discussed at length in Sect. 6.8. For this correlation structure

$$ {\text{Var}}\left( {\overline{X}} \right) = \frac{{\sigma_{X}^{2} }}{n}\left\{ {1 + \frac{2\rho }{n}\frac{{\left[ {n\left( {1 - \rho } \right) - \left( {1 - \rho^{n} } \right)} \right]}}{{\left( {1 - \rho } \right)^{2} }}} \right\} $$
(6.129)

Substitution of the sample estimates for \( \sigma_{X}^{2} \) and ρ X (1) in the equation above often yields a more realistic estimate of the variance of \( \overline{X} \) than does the estimate \( s_{X}^{2} /n \) if the correlation structure ρ X (k) = ρ k is reasonable; otherwise, Eq. 6.127 may be employed. Table 6.9 illustrates the effect of correlation among the X t values on the standard error of their mean, equal to the square root of the variance in Eq. 6.127.

Table 6.9 Standard error of \( \overline{X} \) when σ x  = 0.25 and ρ X (k) = ρ k
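
A small Python sketch, under the assumption that ρ X (k) = ρ k, evaluates Eq. 6.129 and produces standard errors of the kind tabulated in Table 6.9. The sample sizes and ρ values below are illustrative choices, not necessarily those of the table.

```python
import numpy as np

def se_of_mean(sigma, rho, n):
    """Standard error of the sample mean when rho_X(k) = rho**k (Eq. 6.129)."""
    if rho == 0.0:
        factor = 1.0
    else:
        factor = 1.0 + (2.0 * rho / n) * (n * (1 - rho) - (1 - rho ** n)) / (1 - rho) ** 2
    return sigma * np.sqrt(factor / n)

# Standard errors for sigma_X = 0.25 and a few sample sizes and correlations.
for n in (25, 50, 100):
    print(n, [round(se_of_mean(0.25, rho, n), 4) for rho in (0.0, 0.3, 0.6)])
```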

The properties of the estimate of the variance of X,

$$ \hat{\sigma }_{X}^{2} = v_{X}^{2} = \frac{1}{n}\sum\limits_{t = 1}^{n} {\left( {X_{t} - \overline{X}} \right)}^{2} $$
(6.130)

are also affected by correlation among the X t ’s. Here v rather than s is used to denote the variance estimator because n is employed in the denominator rather than n − 1. The expected value of \( v_{x}^{2} \) becomes

$$ E\left[ {v_{X}^{2} } \right] = \sigma_{X}^{2} \left\{ {1 - \frac{1}{n} - \frac{2}{n}\sum\limits_{k = 1}^{n - 1} {\left( {1 - \frac{k}{n}} \right) \rho_{X} \left( k \right)} } \right\} $$
(6.131)

The bias in \( v_{X}^{2} \) depends on terms involving ρ X (1) through ρ X (n − 1). Fortunately, the bias in \( v_{X}^{2} \) decreases with n and is generally unimportant when compared to its variance.

Correlation among the X t ’s also affects the variance of \( v_{X}^{2} \). Assuming that X has a normal distribution (here the variance of \( v_{X}^{2} \) depends on the fourth moment of X), the variance of \( v_{X}^{2} \) for large n is approximately (Kendall and Stuart 1966, Sect. 48.1).

$$ {\text{Var}}\left( {v_{X}^{2} } \right) \cong 2\frac{{\sigma_{X}^{4} }}{n}\left\{ {1 + 2\sum\limits_{k = 1}^{\infty } {\rho_{X}^{2} \left( k \right)} } \right\} $$
(6.132)

where for ρ X (k) = ρ k, Eq. 6.132 becomes

$$ {\text{Var}}\left( {v_{X}^{2} } \right) \cong 2\frac{{\sigma_{X}^{4} }}{n}\left( {\frac{{1 + \rho^{2} }}{{1 - \rho^{2} }}} \right) $$
(6.133)

Like the variance of \( \overline{X} \), the variance of \( v_{X}^{2} \) is inflated by a factor whose importance does not decrease with n. This is illustrated by Table 6.10 that gives the standard deviation of \( v_{X}^{2} \) divided by the true variance \( \sigma_{X}^{2} \) as a function of n and ρ when the observations have a normal distribution and ρ X (k) = ρ k. This would be the coefficient of variation of \( v_{X}^{2} \) were it not biased.

Table 6.10 Standard deviation of \( v_{X}^{2} /\sigma_{X}^{2} \) when observations have a normal distribution and ρ X (k) = ρ k

A fundamental problem of time series analyses is the estimation or description of the relationship between the random variable at different times. The statistics used to describe this relationship are the autocorrelations . Several estimates of the autocorrelations have been suggested; a simple and satisfactory estimate recommended by Jenkins and Watts (1968) is

$$ \hat{\rho }_{X} \left( k \right) = r_{k} = \frac{{\sum\nolimits_{t = 1}^{n - k} {\left( {x_{t} - \bar{x}} \right)\left( {x_{t + k} - \bar{x}} \right)} }}{{\sum\nolimits_{t = 1}^{n} {\left( {x_{t} - \bar{x}} \right)^{2} } }} $$
(6.134)

Here, r k is the ratio of two sums where the numerator contains n − k terms and the denominator contains n terms. The estimate r k is biased, but unbiased estimates frequently have larger mean square errors (Jenkins and Watts 1968). A comparison of the bias and variance of r 1 is provided by the case when the X t ’s are independent normal variates. Then (Kendall and Stuart 1966)

$$ E\left[ {r_{1} } \right] = - \frac{1}{n} $$
(6.135a)

and

$$ {\text{Var}}\left( {r_{1} } \right) = \frac{{\left( {n - 2} \right)^{2} }}{{n^{2} \left( {n - 1} \right)}} \cong \frac{1}{n} $$
(6.135b)

For n = 25, the expected value of r 1 is −0.04 rather than the true value of zero; its standard deviation is 0.19. This results in a mean square error of (E[r 1])2 + Var(r 1) = 0.0016 + 0.0353 = 0.0369. Clearly, the variance of r 1 is the dominant term.

For X t values that are not independent, exact expressions for the variance of r k generally are not available. However, for normally distributed X t and large n (Kendall and Stuart 1966),

$$ \begin{aligned} {\text{Var}}\left( {r_{k} } \right) & \cong \frac{1}{n}\sum\limits_{l = - \infty }^{ + \infty } {\left[ {\rho_{x}^{2} \left( l \right)} \right.} + \rho_{x} \left( {l + k} \right)\rho_{x} \left( {l - k} \right) \\ & \quad - 4\rho_{x} \left( k \right)\rho_{x} \left( l \right)\rho_{x} \left( {k - l} \right) + 2\rho_{x}^{2} \left( k \right)\rho_{x}^{2} \left. {\left( l \right)} \right] \\ \end{aligned} $$
(6.136)

If ρ X (k) is essentially zero for k > q, then the simpler expression (Box et al. 1994)

$$ {\text{Var}}\left( {r_{k} } \right) \cong \frac{1}{n}\left[ {1 + 2\sum\limits_{l = 1}^{q} {\rho_{x}^{2} \left( l \right)} } \right] $$
(6.137)

is valid for r k corresponding to k > q; thus for large n, Var(r k ) ≥ 1/n and values of r k will frequently be outside the range of ±1.65/\( \sqrt n \), even though ρ x (k) may be zero.

If ρ X (k) = ρ k, Eq. 6.137 reduces to

$$ {\text{Var}}\left( {{r}_{k} } \right) \cong \frac{1}{n}\left[ {\frac{{\left( {1 + \rho^{2} } \right)\left( {1 - \rho^{2k} } \right)}}{{1 - \rho^{2} }} - 2k\rho^{2k} } \right] $$
(6.138)

In particular for r l, this gives

$$ {\text{Var}}\left( {r_{1} } \right) \cong \frac{1}{n}\left( {1 - \rho^{2} } \right) $$
(6.139)

Approximate values of the standard deviation of r l for different values of n and ρ are given in Table 6.11.

Table 6.11 Approximate standard deviation of r 1 when observations have a normal distribution and ρ X (k) = ρ k

The estimates of r k and r k+j are highly correlated for small j; this causes plots of r k versus k to exhibit slowly varying cycles when the true values of ρ X (k) may be zero. This increases the difficulty of interpreting the sample autocorrelations .
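
The bias and variance results in Eqs. 6.135a, b are easy to check numerically. The following Python sketch simulates independent normal variates (the case discussed above) and compares the sample mean and variance of r 1 with the theoretical values; the number of replicates is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def r1(x):
    """Lag-one sample autocorrelation r_1 (Eq. 6.134)."""
    xbar = x.mean()
    return np.sum((x[:-1] - xbar) * (x[1:] - xbar)) / np.sum((x - xbar) ** 2)

# Monte Carlo check of Eqs. 6.135a,b for independent normal variates, n = 25:
# E[r_1] should be near -1/n = -0.04 and Var(r_1) near
# (n - 2)**2 / (n**2 * (n - 1)) = 0.035, roughly 1/n.
n, reps = 25, 20000
estimates = np.array([r1(rng.standard_normal(n)) for _ in range(reps)])
print("mean of r1:", estimates.mean().round(3),
      " variance of r1:", estimates.var().round(3))
```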

6.8 Synthetic Streamflow Generation

6.8.1 Introduction

This section is concerned primarily with ways of generating sample data such as streamflows, temperatures, and rainfall that are used in water resource systems simulation studies (e.g., as introduced in the next section). The models and techniques discussed in this section can be used to generate any number of quantities used as inputs to simulation studies. For example Wilks (1998, 2002) discusses the generation of wet and dry days, rainfall depths on wet days, and associated daily temperatures . The discussion here is directed toward the generation of streamflows because of the historic development and frequent use of these models in that context (Matalas and Wallis 1976). In addition, they are relatively simple compared to more complete daily weather generators and many other applications. Generated streamflows have been called synthetic to distinguish them from historical observations (Fiering 1967). The field has been called stochastic hydrologic modeling. More detailed presentations can be found in Marco et al. (1989) and Salas (1993).

River basin simulation studies can use many sets of streamflow, rainfall, evaporation , and/or temperature sequences to evaluate the statistical properties of the performance of alternative water resources systems. For this purpose, synthetic flows and other generated quantities should resemble, statistically, those sequences that are likely to be experienced during the planning period. Figure 6.12 illustrates how synthetic streamflow , rainfall, and other stochastic sequences are used in conjunction with projections of future demands and other economic data to determine how different system designs and operating policies might perform.

Fig. 6.12

Structure of a simulation study, indicating the transformation of a synthetic streamflow sequence, future demands, and a system design and operating policy into system performance statistics

Use of only the historical flow or rainfall record in water resource studies does not allow for the testing of alternative designs and policies against the range of sequences that are likely to occur in the future. We can be very confident that the future sequence of flows will not duplicate the historical one, yet there is important information in that historical record. That information is not fully used if only the historical sequence is simulated. Fitting continuous distributions to the set of historical flows and then using those distributions to generate other sequences of flows, all of which are statistically similar and equally likely, gives one a broader range of inputs to simulation models. Testing designs and policies against that broader range of flow sequences that could occur more clearly identifies the variability and range of possible future performance indicator values. This in turn should lead to the selection of more robust system designs and policies.

Synthetic streamflows are particularly useful for water resource systems having large amounts of over-year storage. Use of only the historical hydrologic record in system simulation yields only one time history of how the system would operate from year to year. In water resource systems having relatively little storage so that reservoirs and/or groundwater aquifers refill almost every year, synthetic hydrologic sequences may not be needed if historical sequences of a reasonable length are available. In this second case, a 25-year historic record provides 25 descriptions of the possible within-year operation of the system. This may be sufficient for many studies.

Generally, use of stochastic sequences is thought to improve the precision with which water resource system performance indices can be estimated, and some studies have shown this to be the case (Vogel and Shallcross 1996; Vogel and Stedinger 1988). In particular, if the operation of the system and performance indices have thresholds and shape breaks, then the coarse description provided by the historical series is likely to provide relatively inaccurate estimates of the expected values of such statistics. For example, suppose that shortages only invoke a nonlinear penalty function on average one year in 20. Then in a 60-year simulation there is a 19% probability that the penalty will be invoked at most once, and an 18% probability it will be invoked five or more times. Thus the calculation of the annual average value of the penalty would be highly unreliable unless some smoothing of the input distributions is combined with a long simulation analysis.
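
The probabilities quoted in this example follow directly from the binomial distribution with p = 0.05 and n = 60 years; a short Python check is given below.

```python
from math import comb

# Probability that a shortage occurring with annual probability p = 0.05
# is invoked at most once, and five or more times, in a 60-year simulation.
p, n = 0.05, 60
binom = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
at_most_once = binom(0) + binom(1)
five_or_more = 1 - sum(binom(k) for k in range(5))
print(round(at_most_once, 2), round(five_or_more, 2))   # about 0.19 and 0.18
```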

On the other hand, if one is only interested in the mean flow, or average benefits that are mostly a linear function of flows, then use of stochastic sequences will probably add little information to what is obtained simply by simulating the historical record. After all, the fitted models are ultimately based on the information provided in the historical record, and their use does not produce new information about the hydrology of the basin.

If in a general sense one has available N years of record, the statistics of that record can be used to build a stochastic model for generating thousands of years of flow. These synthetic data can now be used to estimate more exactly the system performance, assuming, of course, that the flow-generating model accurately represents nature. But the initial uncertainty in the model parameters resulting from having only N years of record would still remain (Schaake and Vicens 1980). An alternative is to run the historical record (if it is sufficiently complete at every site and contains no gaps of missing data) through the simulation model to generate N years of output. That output series can be processed to produce estimates of system performance. So the question is: is it better to generate multiple input series based on uncertain parameter values and use those to determine average system performance with great precision, or is it sufficient to just model the N-year output series that results from simulation of the historical series?

The answer seems to depend upon how well behaved the input and output series are. If the simulation model is linear, it does not make much difference. If the simulation model were highly nonlinear, then modeling the input series would appear to be advisable. Similarly, if one is developing reservoir operating policies, there is a tendency to make a policy sufficiently complex that it deals very well with the few droughts in the historical record, but such a policy may give a false sense of security and misrepresent the probability of system performance failures.

Another situation where stochastic data-generating models are useful is when one wants to understand the impact on system performance estimates of the parameter uncertainty stemming from short historical records. In that case, parameter uncertainty can be incorporated into streamflow generating models so that the generated sequences reflect both the variability that one would expect in flows over time as well as the uncertainty of the parameter values of the models that describe that variability (Valdes et al. 1977; Stedinger and Taylor 1982a, b; Stedinger Pei and Cohn 1985; Vogel and Stedinger 1988).

If one decides to use a stochastic data generator, the challenge is to use a model that appropriately describes the important relationships, but does not attempt to reproduce more relationships than are justified or that can be estimated with available data sets.

Two basic techniques are used for streamflow generation . If the streamflow population can be described by a stationary stochastic process, a process whose parameters do not change over time, and if a long historical streamflow record exists, then a stationary stochastic streamflow model may be fit to the historical flows. This statistical model can then generate synthetic sequences that describe selected characteristics of the historical flows. Several such models are discussed below.

The assumption of stationarity is not always plausible, particularly in river basins that have experienced marked changes in runoff characteristics due to changes in land cover, land use, climate, or the use of groundwater during the period of flow record. Similarly, if the physical characteristics of a basin will change substantially in the future, the historical streamflow record may not provide reliable estimates of the distribution of future unregulated flows. In the absence of the stationarity of streamflows or a representative historical record, an alternative scheme is to assume that precipitation is a stationary stochastic process and to route either historical or synthetic precipitation sequences through an appropriate rainfall-runoff model of the river basin.

6.8.2 Streamflow Generation Models

A statistical streamflow generation model is used to generate streamflow data that can supplement or replace historical streamflow data in various analyses requiring such data. If the past flow record is considered representative of what the future one might be, at least for a while, then the statistical characteristics of the historical flow record can be used as a basis for generating new flow data. While this may be a reasonable assumption in the near future, changing land uses and climate may lead to entirely different statistical characteristics of future streamflows, if not now, certainly in the more distant future. By then, improved global climate models (GCMs) and downscaling methods together with improved rainfall-runoff predictions given future land use scenarios may be a preferred way to generate future streamflows. This section of the chapter will focus on the use of historical records.

The first step in the construction of a statistical streamflow generating model based on historical flow records is to extract from the historical streamflow record the fundamental information about the joint distribution of flows at different sites and at different times. A streamflow model should ideally capture what is judged to be the fundamental characteristics of the joint distribution of the flows. The specification of what characteristics are fundamental is of primary importance.

One may want to model as closely as possible the true marginal distribution of seasonal flows and/or the marginal distribution of annual flows. These describe both how much water may be available at different times and also how variable is that water supply. Also, modeling the joint distribution of flows at a single site in different months, seasons, and years may be appropriate. The persistence of high flows and of low flows, often described by their correlation , affects the reliability with which a reservoir of a given size can provide a given yield (Fiering 1967; Lettenmaier and Burges 1977a, b; Thyer and Kuczera 2000). For multicomponent reservoir systems, reproduction of the joint distribution of flows at different sites and at different times will also be important.

Sometimes, a streamflow model is said to statistically resemble the historical flows if the streamflow model produces flows with the same mean, variance, skew coefficient, autocorrelations , and/or cross-correlations as were observed in the historic series. This definition of statistical resemblance is attractive because it is operational and requires that an analyst need only find a model that can reproduce the observed statistics. The drawback of this approach is that it shifts the modeling emphasis away from trying to find a good model of marginal distributions of the observed flows and their joint distribution over time and over space, given the available data, to just reproducing arbitrarily selected statistics. Defining statistical resemblance in terms of moments may also be faulted for specifying that the parameters of the fitted model should be determined using the observed sample moments, or their unbiased counterparts. Other parameter estimation techniques, such as maximum likelihood estimators, are often more efficient. Definition of resemblance in terms of moments can also lead to confusion over whether the population parameters should equal the sample moments, or whether the fitted model should generate flow sequences whose sample moments equal the historical values—the two concepts are different because of the biases (as discussed in Sect. 6.7) in many of the estimators of variances and correlations (Matalas and Wallis 1976; Stedinger 1980, 1981; Stedinger and Taylor 1982a).

For any particular river basin study, one must determine what streamflow characteristics need to be modeled. The decision should depend on what characteristics are important to the operation of the system being studied, the data available, and how much time can be spared to build and test a stochastic model . If time permits, it is good practice to see if the simulation results are in fact sensitive to the generation model and its parameter values using an alternative model and set of parameter values. If the model’s results are sensitive to changes, then, as always, one must exercise judgment in selecting the appropriate model and parameter values to use.

This section presents a range of statistical models for the generation of synthetic data. The necessary sophistication of a data-generating model depends on the intended use of the data. Section 6.8.3 below presents the simple autoregressive Markov model for generating annual flow sequences. This model alone is too simple for many practical studies, but is useful for illustrating the fundamentals of the more complex models that follow. Therefore, considerable time is spent exploring the properties of this basic model.

Subsequent sections discuss how flows with any marginal distribution can be produced and present models for generating sequences of flows that can reproduce the persistence of historical flow sequences. Other parts of this section present models to generate concurrent flows at several sites and to generate seasonal or monthly flows while preserving the characteristics of annual flows. For those wishing to study synthetic streamflow models in greater depth, more advanced material can be found in Marco et al. (1989) and Salas (1993).

6.8.3 A Simple Autoregressive Model

A simple model of annual streamflows is the autoregressive Markov model. The historical annual flows q y are thought of as a particular value of a stationary stochastic process Q y . The generation of annual streamflows and other variables would be a simple matter if annual flows were independently distributed. In general, this is not the case and a generating model for many phenomena should capture the relationship between values in different years or in different periods. A common and reasonable assumption is that annual flows are the result of a first-order Markov process .

Assume also that annual streamflows are normally distributed. In some areas, the distribution of annual flows is in fact nearly normal. Streamflow models that produce nonnormal streamflows are discussed as an extension of this simple model.

The joint normal density function of two streamflows Q y and Q w in years y and w having mean μ, variance σ 2, and year-to-year correlation ρ between flows is

$$ \begin{aligned} f\left( {q_{y} , q_{w} } \right) & = \dfrac{1}{{2\pi \sigma^{2} \left( {1 - \rho^{2} } \right)^{0.5} }} \\& \quad \cdot {\exp} \left[ { - \frac{{\left( {q_{y} - \mu } \right)^{2} - 2\rho \left( {q_{y} - \mu } \right)\left( {q_{w} - \mu } \right) + \left( {q_{w} - \mu } \right)^{2} }}{{2\sigma^{2} \left( {1 - \rho^{2} } \right)}}} \right]\end{aligned} $$
(6.140)

The joint normal distribution for two random variables with the same mean and variance depends only on their common mean μ, variance σ 2, and the correlation ρ between the two (or equivalently the covariance ρσ 2).

The sequential generation of synthetic streamflows requires the conditional distribution of the flow in one year given the value of the flows in previous years. However, if the streamflows are a first-order (lag 1) Markov process , then the dependence of the distribution of the flow in year y + 1 on flows in previous years depends entirely on the value of the flow in year y. In addition, if the annual streamflows have a multivariate normal distribution, then the conditional distribution of Q y+1 is normal with mean and variance

$$ \begin{aligned} & {E}[Q_{{{y} + 1}} |Q_{y} = q_{y} ] = \mu + \rho (q_{y} - \mu ) \\ & {\text{Var}}(Q_{{{y} + 1}} |Q_{y} = q_{y} ) = \sigma^{2} (1 - \rho^{2} ) \\ \end{aligned} $$
(6.141)

where q y is the value of Q y in year y. Notice that the larger the absolute value of the correlation ρ between the flows, the smaller the conditional variance of Q y+1, which in this case does not depend at all on the value q y .

Synthetic normally distributed streamflows that have mean μ, variance σ 2, and year-to-year correlation ρ, are produced by the model

$$ Q_{{{y} + 1}} = \mu + \rho (Q_{y} - \mu ) + V_{y} {\sigma }\sqrt {1 - \rho^{2} } $$
(6.142)

where V y is a standard normal random variable, meaning that it has zero mean, E[V y ] = 0, and unit variance, \( E\left[ {V_{y}^{2} } \right] = 1 \). The random variable V y is added here to provide the variability in Q y+1 that remains even after Q y is known. By construction, each V y is independent of past flows Q w where w ≤ y, and V y is independent of V w for w ≠ y. These restrictions imply that

$$ {E}\left[ {V_{w} V_{y} } \right] = 0\quad w \ne y $$
(6.143)

and

$$ {E}[(Q_{w} - \mu )V_{y} ] = 0\quad w \le y $$
(6.144)

Clearly, Q y+1 will be normally distributed if both Q y and V y are normally distributed because sums of independent normally distributed random variables are normally distributed.
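
A minimal Python sketch of this generator, applying Eq. 6.142 recursively, is given below. The parameter values are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def generate_ar1_flows(mu, sigma, rho, n_years, q0=None, seed=None):
    """Generate synthetic annual flows from the autoregressive Markov model
    of Eq. 6.142: Q_{y+1} = mu + rho*(Q_y - mu) + V_y*sigma*sqrt(1 - rho**2)."""
    rng = np.random.default_rng(seed)
    q = np.empty(n_years)
    # Start from the unconditional normal distribution unless a value is given.
    q[0] = rng.normal(mu, sigma) if q0 is None else q0
    for y in range(n_years - 1):
        v = rng.standard_normal()                     # standard normal V_y
        q[y + 1] = mu + rho * (q[y] - mu) + v * sigma * np.sqrt(1 - rho**2)
    return q

# Illustrative parameter values only.
flows = generate_ar1_flows(mu=100.0, sigma=30.0, rho=0.4, n_years=1000, seed=7)
print(flows.mean().round(1), flows.std().round(1))
```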

It is a straightforward procedure to show that this basic model indeed produces streamflows with the specified moments, as demonstrated below.

Using the fact that E[V y ] = 0, the conditional mean of Q y+1 given that Q y equals q y is

$$ {E}[Q_{{{y} + 1}} |q_{y} ] = {E}[\mu + {\rho }(q_{y} - \mu ) + V_{y} {\sigma }\sqrt {1 - \rho^{2} } ] = \mu + \rho (q_{y} - \mu ) $$
(6.145)

Since E[V 2 y ] = Var[V y ] = 1, the conditional variance of Q y+1 is

$$ \begin{aligned} {\text{Var}}[Q_{{{y} + 1}} |q_{y} ] & = {E}[\{ Q_{{{y} + 1}} - {E}[Q_{{{y} + 1}} |q_{y} ]\}^{2} |q_{y} ] \\ & = {E}[\{ \mu + \rho (q_{y} - \mu ) + V_{y} {\sigma }\sqrt {1 - \rho^{2} } \\& \quad - [\mu + \rho (q_{y} - \mu )]\}^{2} ] \\ & = {E}[(V_{y} {\sigma }\sqrt {1 - \rho^{2} } )^{2} ] = {\sigma }^{2} (1 - \rho^{2} ) \\ \end{aligned} $$
(6.146)

Thus this model produces flows with the correct conditional mean and variance .

To compute the unconditional mean of Q y+1 one first takes the expectation of both sides of Eq. 6.142 to obtain

$$ \begin{aligned} {E}\left[ {Q_{{{y} + 1}} } \right] & = \mu + \rho ({E}\left[ {Q_{y} } \right] - \mu ) \\& \quad + {E}\left[ {V_{y} } \right]{\sigma }\sqrt {1 - \rho^{2} }\end{aligned} $$
(6.147)

where E[V y ] = 0. If the distribution of streamflows is independent of time so that for all y, E[Q y+1] = E[Q y ] = E[Q], it is clear that (1 − ρ) E[Q] = (1 − ρ)μ or

$$ {E}[{Q}] = {\mu } $$
(6.148)

Alternatively, if Q y for y = 1 has mean μ, then Eq. 6.147 indicates that Q 2 will have mean μ. Thus repeated application of Eq. 6.147 would demonstrate that all Q y for y > 1 have mean μ.

The unconditional variance of the annual flows can be derived by squaring both sides of Eq. 6.142 to obtain

$$ \begin{aligned} {E}[(Q_{{{y} + 1}} - \mu )^{2} ] & = {E}[\{ {\rho }(Q_{y} - \mu ) + V_{y} {\sigma }\sqrt {1 - \rho^{2} } \}^{2} ] \\ & = {\rho }^{2} {E}[(Q_{y} - \mu )^{2} ] + 2\rho \sigma \sqrt {1 - \rho^{2} } {E}[(Q_{y} - \mu )V_{y} ] \\& \quad + {\sigma }^{2} (1 - \rho^{2} ){E}\left[ {V_{y}^{2} } \right] \\ \end{aligned} $$
(6.149)

Because V y is independent of Q y (Eq. 6.144), the second term on the right-hand side of Eq. 6.149 vanishes. Hence the unconditional variance of Q satisfies

$$ {E}[(Q_{{{y} + 1}} - \mu )^{2} ] = {\rho }^{2} {E}[(Q_{y} - \mu )^{2} ] + {\sigma }^{2} (1 - {\rho }^{2} ) $$
(6.150)

Assuming that Q y+1 and Q y have the same variance yields

$$ {E}[(Q - \mu )^{2} ] = {\sigma }^{2} $$
(6.151)

so that the unconditional variance is σ 2, as required.

Again, if one does not want to assume that Q y+1 and Q y have the same variance, a recursive argument can be adopted to demonstrate that if Q 1 has variance σ 2, then Q y for y ≥ 1 has variance σ 2.

The covariance of consecutive flows is another important issue. After all, the whole idea of building these time series models is to describe the year-to-year correlation of the flows. Using Eq. 6.142, one can compute that the covariance of consecutive flows must be

$$ \begin{aligned} {E}[(Q_{{{y} + 1}} - \mu )(Q_{y} - \mu )] & = {E}\{ [\rho (Q_{y} - \mu ) \\& \quad + V_{y} {\sigma }\sqrt {1 - \rho^{2} } ](Q_{y} - \mu )\} \\ & = \rho {E}[(Q_{y} - \mu )^{2} ] = \rho {\sigma }^{2} \\ \end{aligned} $$
(6.152)

where E[(Q y  − μ)V y ] = 0 because V y and Q y are independent (Eq. 6.144).

Over a longer time scale, another property of this model is that the covariance of flows in year y and y + k is

$$ {E}[(Q_{{{y} + {k}}} - \mu )(Q_{y} - \mu )] = {\rho }^{k} {\sigma }^{2} $$
(6.153)

This equality can be proven by induction. It has already been shown for k = 0 and 1. If it is true for k = j − 1, then

$$ \begin{aligned} {E}[(Q_{{{y} + {j}}} - \mu )(Q_{y} - \mu )] & = {E}\{ [\rho (Q_{{{y} + {j} - 1}} - \mu ) \\& \quad + V_{{{y} + {j} - 1}} {\sigma }\sqrt {1 - \rho^{2} } ](Q_{y} - \mu )\} \\ & = \rho {E}[(Q_{{{y} + {j} - 1}} - \mu )(Q_{y} - \mu )] \\& = \rho [\rho^{{{j} - 1}} {\sigma }^{2} ] = \rho^{j} {\sigma }^{2} \\ \end{aligned} $$
(6.154)

where E[(Q y  − μ)V y+j−1] = 0 for j ≥ 1. Hence Eq. 6.153 is true for any value of k.

It is important to note that the results in Eqs. 6.145 to 6.153 do not depend on the assumption that the random variables Q y and V y are normally distributed. These relationships apply to all autoregressive Markov processes of the form in Eq. 6.142 regardless of the distributions of Q y and V y . However, if the flow Q y in year y = 1 is normally distributed with mean μ and variance σ 2, and if the V y are independent normally distributed random variables with mean zero and unit variance, then the generated Q y for y ≥ 1 will also be normally distributed with mean μ and variance σ 2. The next section considers how this and other models can be used to generate streamflows that have other than a normal distribution.

6.8.4 Reproducing the Marginal Distribution

Most models for generating stochastic processes deal directly with normally distributed random variables. Unfortunately, flows are not always adequately described by the normal distribution. In fact, streamflows and many other hydrologic data cannot really be normally distributed because of the impossibility of negative values. In general, distributions of hydrologic data are positively skewed having a lower bound near zero and, for practical purposes, an unbounded right-hand tail. Thus they look like the gamma or lognormal distribution illustrated in Figs. 6.3 and 6.4.

The asymmetry of a distribution is often measured by its coefficient of skewness . In some streamflow models, the skew of the random elements V y is adjusted so that the models generate flows with the desired mean, variance, and skew coefficient. For the autoregressive Markov model for annual flows

$$ \begin{aligned} {E}[(Q_{{{y} + 1}} - \mu )^{3} ] & = {E}\left[ {\rho (Q_{y} - \mu ) + V_{y} {\sigma }\sqrt {1 - \rho^{2} } } \right]^{3} \\ & = {\rho }^{3} {E}[(Q_{y} - \mu )^{3} ] \\& \quad + {\sigma }^{3} (1 - \rho^{2} )^{3/2} {E}\left[ {V_{y}^{3} } \right] \\ \end{aligned} $$
(6.155)

so that

$$ \gamma_{Q} = \frac{{E[(Q - \mu )^{3} ]}}{{\sigma^{3} }} = \frac{{(1 - \rho^{2} )^{3/2} }}{{1 - \rho^{3} }}\gamma_{V} $$
(6.156)

where γ V is the skew coefficient of V y . By appropriate choice of the skew of V y , the desired skew coefficient of the annual flows can be produced. This method has often been used to generate flows that have approximately a gamma distribution using V y ’s with a gamma distribution and the required skew. The resulting approximation is not always adequate (Lettenmaier and Burges 1977a).

The alternative and generally preferred method is to generate normal random variables and then transform these variates to streamflows with the desired marginal distribution. Common choices for the distribution of streamflows are the two-parameter and three-parameter lognormal distributions or a gamma distribution. If Q y is a lognormally distributed random variable , then

$$ Q_{y} = {\tau } + \exp (X_{y} ) $$
(6.157)

where X y is a normal random variable; when the lower bound τ is zero, Q y has a two-parameter lognormal distribution. Equation 6.157 transforms the normal variates X y into lognormally distributed streamflows. The transformation is easily inverted to obtain

$$ X_{y} = \ln (Q_{y} - {\tau })\quad{\text{for}}\,Q_{y} \ge {\tau } $$
(6.158)

where Q y must be greater than its lower bound τ.

The mean, variance, and skewness of X y and Q y are related by the formulas (Matalas 1967)

$$ \begin{aligned} \mu_{Q} & = {\tau } + \exp (\mu_{X} + \frac{1}{2}\sigma_{X}^{2} ) \\ \sigma_{Q}^{2} & = \exp (2\mu_{X} + \sigma_{X}^{2} )[\exp (\sigma_{X}^{2} ) - 1] \\ {\gamma }_{Q} & = \frac{{\exp(3\sigma_{X}^{2} ) - 3 \exp(\sigma_{X}^{2} ) + 2}}{{[\exp(\sigma_{X}^{2} ) - 1]^{3/2} }} \\ \end{aligned} $$
(6.159)

If normal variates \( X_{y}^{s} \; {\text{and }}\;X_{y}^{u} \) are used to generate lognormally distributed streamflows \( Q_{y}^{s} \;{\text{and}}\;Q_{y}^{u} \) at sites s and u, then the lag-k correlation of the Q y ’s, denoted ρ Q (k; s, u), is determined by the lag-k correlation of the X variables, denoted ρ X (k; s, u), and their variances \( \sigma_{X}^{2} ({s}) \) and \( \sigma_{X}^{2} ({u}) \), where

$$ \rho_{Q} ({k;s, u}) = \frac{{\exp [\rho_{X} ({k}; {s}, {u})\sigma_{X} ({s}) \sigma_{X} ({u})] - 1}}{{\left\{ {\exp [\sigma_{X}^{2} ({s})] - 1} \right\}^{1/2} \left\{ {\exp [\sigma_{X}^{2} ({u})] - 1} \right\}^{1/2} }} $$
(6.160)

The correlations of the \( {X}_{y}^{s} \) can be adjusted, at least in theory, to produce the observed correlations among the \( {Q}_{y}^{s} \) variates. However, more efficient estimates of the true correlation of the \( {Q}_{y}^{s} \) values are generally obtained by transforming the historical flows \( {q}_{y}^{s} \) into their normal equivalents \( {x}_{y}^{s} = \ln ({q}_{y}^{s} - \tau ) \) and using the historical correlations of these \( {x}_{y}^{s} \) values as estimators of ρ X (k; s, u) (Stedinger 1981).
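
The following Python sketch illustrates these relationships. It inverts the first two formulas of Eq. 6.159 for μ X and \( \sigma_{X}^{2} \) given a target mean and standard deviation of Q (with the lower bound τ treated as known), and evaluates Eq. 6.160 to show the flow-space correlation implied by a given log-space correlation. The numerical values are assumptions used only for illustration.

```python
import numpy as np

def lognormal_params(mu_q, sigma_q, tau=0.0):
    """Invert the first two relations of Eq. 6.159 for mu_X and sigma_X**2,
    given the desired mean and standard deviation of Q and a lower bound tau."""
    cv2 = (sigma_q / (mu_q - tau)) ** 2        # squared coefficient of variation of Q - tau
    sigma_x2 = np.log(1.0 + cv2)
    mu_x = np.log(mu_q - tau) - 0.5 * sigma_x2
    return mu_x, sigma_x2

def rho_q_from_rho_x(rho_x, sigma_x2_s, sigma_x2_u):
    """Flow-space correlation implied by a log-space correlation (Eq. 6.160)."""
    num = np.exp(rho_x * np.sqrt(sigma_x2_s * sigma_x2_u)) - 1.0
    den = np.sqrt((np.exp(sigma_x2_s) - 1.0) * (np.exp(sigma_x2_u) - 1.0))
    return num / den

# Illustrative target moments for Q (not from the text).
mu_x, sx2 = lognormal_params(mu_q=100.0, sigma_q=30.0, tau=0.0)
print(mu_x.round(3), sx2.round(3), rho_q_from_rho_x(0.4, sx2, sx2).round(3))
```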

Some insight into the effect of this logarithmic transformation can be gained by considering the resulting model for annual flows at a single site. If the normal variates follow the simple autoregressive Markov model

$$ X_{{{y} + 1}} - {\mu } = {\rho }_{X} (X_{y} - \mu ) + V_{y} {\sigma }_{X} \sqrt {1 - \rho_{X}^{2} } $$
(6.161)

then the corresponding Q y follow the model (Matalas 1967)

$$ Q_{{{y} + 1}} = {\tau } + D_{y} \{ \exp [\mu_{x} (1 - {\rho }_{X} )]\} (Q_{y} - {\tau })^{{\rho_{X} }} $$
(6.162)

where

$$ D_{y} = \exp [(1 - \rho_{X}^{2} )^{1/2} {\sigma }_{X} V_{y} ] $$
(6.163)

The conditional mean and standard deviation of Q y+1 given that Q y  = q y now depend on \( (q_{y} - {\tau })^{{\rho_{X} }} \). Because the conditional mean of Q y+1 is no longer a linear function of q y , the streamflows are said to exhibit differential persistence: low flows are now more likely to follow low flows than high flows are to follow high flows. This is a property often attributed to real streamflow distributions. Models can be constructed to capture the relative persistence of wet and dry periods (Matalas and Wallis 1976; Salas 1993; Thyer and Kuczera 2000). Many weather generators for precipitation and temperature naturally include such differences by employing a Markov chain description of the occurrence of wet and dry days (Wilks 1998).
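
A minimal sketch of such a generator is shown below: normal variates X y are produced with the autoregressive model of Eq. 6.161 and transformed with Eq. 6.157 to obtain lognormally distributed flows. The parameter values are illustrative assumptions, roughly consistent with the inversion sketched earlier; they are not taken from the text.

```python
import numpy as np

def generate_lognormal_ar1(mu_x, sigma_x, rho_x, tau, n_years, seed=None):
    """Generate flows with a (three-parameter) lognormal marginal by running
    the AR(1) model of Eq. 6.161 on X_y and transforming with Eq. 6.157."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_years)
    x[0] = rng.normal(mu_x, sigma_x)               # start from the marginal of X
    for y in range(n_years - 1):
        v = rng.standard_normal()
        x[y + 1] = mu_x + rho_x * (x[y] - mu_x) + v * sigma_x * np.sqrt(1 - rho_x**2)
    return tau + np.exp(x)                          # Eq. 6.157

# Illustrative log-space parameters (approximately mean 100, std 30 in flow space).
q = generate_lognormal_ar1(mu_x=4.56, sigma_x=0.29, rho_x=0.4, tau=0.0,
                           n_years=5000, seed=3)
print(q.mean().round(1), q.std().round(1))
```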

6.8.5 Multivariate Models

If long concurrent streamflow records can be constructed at the several sites at which synthetic streamflows are desired, then ideally a general multisite streamflow model could be employed. O’Connell (1977), Ledolter (1978), Salas et al. (1980) and Salas (1993) discuss multivariate models and parameter estimation. Unfortunately, model identification (parameter value estimation) is very difficult for the general multivariate models.

This section illustrates how the basic univariate annual flow model in Sect. 6.8.3 can be generalized to the multivariate case. This exercise reveals how easily multivariate models can be constructed to reproduce specified variances and covariances of the flow vectors of interest, or some transformation of those values. This multisite generalization of the annual AR(1) or autoregressive Markov model follows the approach taken by Matalas and Wallis (1976). This general approach can be further extended to multisite/multiseason modeling procedures, as is done in the next section employing what have been called disaggregation models. However, while the size of the model matrices and vectors increases, the models are fundamentally the same from a mathematical viewpoint. Hence this section starts with the simpler case of an annual flow model.

For simplicity of presentation and clarity, vector notation is employed. Let \( {\mathbf{Z}}_{y} = (Z_{y}^{1} ,\, \ldots \,,Z_{y}^{n} )^{\text{T}} \) be the column vector of transformed zero-mean annual flows at sites s = 1, 2, …, n, so that

$$ {E}[{Z}_{y}^{s} ] = 0 $$
(6.164)

In addition, let \( {\mathbf{V}}_{y} = \left( {V_{y}^{1} ,\, \ldots ,\,V_{y}^{n} } \right)^{\text{T}} \) be a column vector of standard normal random variables, where \( {V}_{y}^{s} \) is independent of \( {V}_{w}^{r} \) for (r, w) ≠ (s, y) and independent of past flows \( Z_{w}^{r} \) where y ≥ w. The assumption that the variables have zero mean implicitly suggests that the mean value has already been subtracted from all the variables. This makes the notation simpler and eliminates the need to include a constant term in the models. With all the variables having zero mean, one can focus on reproducing the variances and covariances of the vectors included in a model.

A sequence of synthetic flows can be generated by the model

$$ {\mathbf{Z}}_{{{y} + 1}} = {\mathbf{AZ}}_{y} + {\mathbf{BV}}_{y} $$
(6.165)

where A and B are (n × n) matrices whose elements are chosen to reproduce the lag 0 and lag 1 cross-covariances of the flows at each site. The lag 0 and lag 1 covariances and cross-covariances can most economically be manipulated by use of the two matrices S 0 and S 1; the lag-zero covariance matrix, denoted S 0, is defined as

$$ {\mathbf{S}}_{0} = {E}[{\mathbf{Z}}_{y} {\mathbf{Z}}_{y}^{\text{T}} ] $$
(6.166)

and has elements

$$ {S}_{0} (i,j) = {E}\left[ {{{Z}}_{y}^{i} {{Z}}_{y}^{j} } \right] $$
(6.167)

The lag-one covariance matrix, denoted S 1, is defined as

$$ {\mathbf{S}}_{1} = {E}[{\mathbf{Z}}_{y + 1} {\mathbf{Z}}_{y}^{\text{T}} ] $$
(6.168)

and has elements

$$ {S}_{1} (i,j) = {E}\left[ {{{Z}}_{y + 1}^{i} {{Z}}_{y}^{j} } \right] $$
(6.169)

The covariances do not depend on y because the streamflows are assumed to be stationary.

Matrix S 1 contains the lag 1 covariances and lag 1 cross-covariances. S 0 is symmetric because the cross covariance S0(i, j) equals S0(j, i). In general, S 1 is not symmetric.

The variance–covariance equations that define the values of A and B in terms of S 0 and S 1 are obtained by manipulations of Eq. 6.165. Multiplying both sides of that equation by Z T y and taking expectations yields

$$ E\left[ {{\mathbf{Z}}_{y + 1} {\mathbf{Z}}_{y}^{\text{T}} } \right] = E\left[ {{\mathbf{AZ}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] + E\left[ {{\mathbf{BV}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] $$
(6.170)

The second term on the right-hand side vanishes because the components of Z y and V y are independent. Now the first term in Eq. 6.170, \( E\left[ {{\mathbf{AZ}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] \), is a matrix whose (i, j)th element equals

$$ E\left[ {\sum\limits_{k = 1}^{n} {a_{ik} Z_{y}^{k} Z_{y}^{j} } } \right] = \sum\limits_{k = 1}^{n} {a_{ik} {E}[Z_{y}^{k} Z_{y}^{j} ]} $$
(6.171)

The matrix with these elements is the same as the matrix \( {\mathbf{A}}{E}\left[ {{\mathbf{Z}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] \).

Hence, A—the matrix of constants—can be pulled through the expectation operator just as is done in the scalar case where E[aZ y  + b] = aE[Z y ] + b for fixed constants a and b.

Substituting S 0 and S 1 for the appropriate expectations in Eq. 6.170 yields

$$ {\mathbf{S}}_{1} = {\mathbf{AS}}_{ 0} \quad {\text{or}}\quad {\mathbf{A}} = {\mathbf{S}}_{1} {\mathbf{S}}_{0}^{ - 1} $$
(6.172)

A relationship to determine the matrix B is obtained by multiplying both sides of Eq. 6.165 by its own transpose (this is equivalent to squaring both sides of the scalar equation a = b) and taking expectations to obtain

$$ \begin{aligned} {E}[{\mathbf{Z}}_{y + 1} {\mathbf{Z}}_{y + 1}^{\text{T}} ] & = {E}[{\mathbf{AZ}}_{y} {\mathbf{Z}}_{y}^{\text{T}} {\mathbf{A}}^{\text{T}} ] + {E}[{\mathbf{AZ}}_{y} {\mathbf{V}}_{y}^{\text{T}} {\mathbf{B}}^{\text{T}} ] \\ & \quad + {E}\left[ {{\mathbf{BV}}_{y} {\mathbf{Z}}_{y}^{\text{T}} {\mathbf{A}}^{\text{T}} } \right] + {E}[{\mathbf{BV}}_{y} {\mathbf{V}}_{y}^{\text{T}} {\mathbf{B}}^{\text{T}} ] \\ \end{aligned} $$
(6.173)

The second and third terms on the right-hand side of Eq. 6.173 vanish because the components of Z y and V y are independent and have zero mean. \( {E}\left[ {{\mathbf{V}}_{y} {\mathbf{V}}_{y}^{T} } \right] \) equals the identity matrix because the components of V y are independently distributed with unit variance. Thus

$$ {\mathbf{S}}_{0} = {\mathbf{AS}}_{0} {\mathbf{A}}^{\text{T}} + {\mathbf{BB}}^{\text{T}} $$
(6.174)

Solving for the B matrix, one finds that it should satisfy

$$ {\mathbf{BB}}^{\text{T}} = {\mathbf{S}}_{0} - {\mathbf{AS}}_{0} {\mathbf{A}}^{\text{T}} = {\mathbf{S}}_{0} - {\mathbf{S}}_{1} {\mathbf{S}}_{0}^{ - 1} {\mathbf{S}}_{1}^{\text{T}} $$
(6.175)

The last equation results from substitution of the relationship for A given in Eq. 6.172 and the fact that S 0 is symmetric; hence \( {\mathbf{S}}_{0}^{ - 1} \) is symmetric.

It should not be too surprising that the elements of B are not uniquely determined by Eq. 6.175. The components of the random vector V y may be combined in many ways to produce the desired covariances as long as B satisfies Eq. 6.175. A lower triangular matrix that satisfies Eq. 6.175 can be calculated by Cholesky decomposition (Young 1968; Press et al. 1986).
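
The fitting and generation steps can be sketched in Python as follows, assuming the transformed zero-mean flows are arranged in a years-by-sites array. S 0 and S 1 are estimated from the data, A and B follow from Eqs. 6.172 and 6.175, and B is obtained by Cholesky decomposition, which requires that the estimated BB T be positive definite. Function names and array layout are choices made for this sketch.

```python
import numpy as np

def fit_lag1_model(z):
    """Estimate A and B of Eq. 6.165 from a (years x sites) array of
    transformed annual flows, using Eqs. 6.172 and 6.175."""
    zc = z - z.mean(axis=0)                      # enforce zero mean (Eq. 6.164)
    n_years = zc.shape[0]
    s0 = zc.T @ zc / n_years                     # lag-zero covariance matrix S0
    s1 = zc[1:].T @ zc[:-1] / (n_years - 1)      # lag-one covariance matrix S1
    a = s1 @ np.linalg.inv(s0)                   # Eq. 6.172
    bbt = s0 - a @ s0 @ a.T                      # Eq. 6.175
    b = np.linalg.cholesky(bbt)                  # lower triangular B with B B^T = bbt
    return a, b

def generate(a, b, n_years, seed=None):
    """Generate zero-mean synthetic flow vectors with Eq. 6.165."""
    rng = np.random.default_rng(seed)
    n_sites = a.shape[0]
    z = np.zeros((n_years, n_sites))             # starts at zero; in practice a
    for y in range(n_years - 1):                 # warm-up period would be discarded
        z[y + 1] = a @ z[y] + b @ rng.standard_normal(n_sites)
    return z

# Usage sketch: a, b = fit_lag1_model(historical_z); z_syn = generate(a, b, 1000)
```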

Matalas and Wallis (1976) call Eq. 6.165 the lag-1 model. They did not call the lag-1 model a Markov model because the streamflows at individual sites do not have the covariances of an autoregressive Markov process given in Eq. 6.153. They suggest an alternative model they call the Markov model. It has the same structure as the lag-1 model except it does not preserve the lag-1 cross-covariances. By relaxing this requirement, they obtain a simpler model with fewer parameters that generates flows that have the covariances of an autoregressive Markov process at each site. In their Markov model, the new A matrix is simply a diagonal matrix whose diagonal elements are the lag-1 correlations of flows at each site

$$ {\mathbf{A}} = {\text{diag}}[{\rho }\left( {1;i,i} \right)] $$
(6.176)

where ρ(1; i, i) is the lag-one correlation of flows at site i.

The corresponding B matrix depends on the new A matrix and S 0, where as before

$$ {\mathbf{BB}}^{\text{T}} = {\mathbf{S}}_{0} - {\mathbf{AS}}_{0} {\mathbf{A}}^{\text{T}} $$
(6.177)

The idea of fitting time series models to each site separately and then correlating the innovations in those separate models to reproduce the cross-correlation between the series is a very general and powerful modeling idea that has seen a number of applications with different time series models (Matalas and Wallis 1976; Stedinger et al. 1985; Camacho et al. 1985; Salas 1993).

6.8.6 Multiseason, Multisite Models

In most studies of surface water systems it is necessary to consider the variations of flows within each year. Streamflows in most areas have within-year variations, exhibiting wet and dry periods. Similarly, water demands for irrigation, municipal, and industrial uses also vary, and the variations in demand are generally out of phase with the variation in within-year flows; more water is usually desired when streamflows are low and less is desired when flows are high. This increases the stress on water delivery systems and makes it all the more important that time series models of streamflows, precipitation and other hydrological variables correctly reproduce the seasonality of hydrological processes .

This section discusses two approaches to generating within-year flows. The first approach is based on the disaggregation of annual flows produced by an annual flow generator to seasonal flows. Thus the method allows for reproduction of both the annual and seasonal characteristics of streamflow series. The second approach generates seasonal flows in a sequential manner, as was done for the generation of annual flows. Thus the models are a direct generalization of the annual flow models already discussed.

6.8.6.1 Disaggregation Model

The disaggregation model proposed by Valencia and Schaake (1973) and extended by Mejia and Rousselle (1976) and Tao and Delleur (1976) allows for the generation of synthetic flows that reproduce statistics both at the annual level and at the seasonal level. Subsequent improvements and variations are described by Stedinger and Vogel (1984), Maheepala and Perera (1996), Koutsoyiannis and Manetas (1996) and Tarboton et al. (1998).

Disaggregation models can be used for either multiseason single-site or multisite streamflow generation . They represent a very flexible modeling framework for dealing with different time or spatial scales . Annual flows for the several sites in question or the aggregate total annual flow at several sites can be the input to the model (Grygier and Stedinger 1988). These must be generated by another model, such as those discussed in the previous sections. These annual flows or aggregated annual flows are then disaggregated to seasonal values.

Let \( {\mathbf{Z}}_{y} = \left( {{Z}_{y}^{1} ,\, \ldots ,\,{Z}_{y}^{N} } \right)^{\text{T}} \) be the column vector of N transformed normally distributed annual or aggregate annual flows for N separate sites or basins. Next let X y  = \( \left( {X_{1,y}^{1} ,\, \ldots ,\,X_{T,y}^{1} ,\,X_{1,y}^{2} ,\, \ldots ,\,X_{T,y}^{2} ,\, \ldots ,\,X_{1,y}^{n} ,\, \ldots ,\,X_{T,y}^{n} } \right)^{\text{T}} \) be the column vector of nT transformed normally distributed seasonal flows \( {X}_{ty}^{s} \) for season t, year y, and site s.

Assuming that the annual and seasonal series, \( {Z}_{y}^{s} \;{\text{ and }}\;{X}_{ty}^{s} \), have zero mean (after the appropriate transformation), the basic disaggregation model is

$$ {\mathbf{X}}_{y} = {\mathbf{AZ}}_{y} + {\mathbf{BV}}_{y} $$
(6.178)

where V y is a vector of nT independent standard normal random variables , and A and B are, respectively, nT × N and nT × nT matrices. One selects values of the elements of A and B to reproduce the observed correlations among the elements of X y and between the elements of X y and Z y . Alternatively, one could attempt to reproduce the observed correlations of the untransformed flows as opposed to the transformed flows, although this is not always possible (Hoshi et al. 1978) and often produces poorer estimates of the actual correlations of the flows (Stedinger 1981).

The values of A and B are determined using the matrices \( {\mathbf{S}}_{zz} = {E}[ {{\mathbf{Z}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } ] \), \( {\mathbf{S}}_{xx} = {E}[ {{\mathbf{X}}_{y} {\mathbf{X}}_{y}^{\text{T}} } ] \), \( {\mathbf{S}}_{xz} = {E}[ {{\mathbf{X}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } ] \), and \( {\mathbf{S}}_{zx} = {E}[ {{\mathbf{Z}}_{y} {\mathbf{X}}_{y}^{\text{T}} }] \), where S zz was called S 0 earlier. Clearly, \( {\mathbf{S}}_{xz}^{\text{T}} = {\mathbf{S}}_{zx} \). If S xz is to be reproduced, then by multiplying Eq. 6.178 on the right by \( {\mathbf{Z}}_{y}^{\text{T}} \) and taking expectations , one sees that A must satisfy

$$ {E}\left[ {{\mathbf{X}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] = {E}\left[ {{\mathbf{AZ}}_{y} {\mathbf{Z}}_{y}^{\text{T}} } \right] $$
(6.179)

or

$$ {\mathbf{S}}_{xz} = {\mathbf{AS}}_{zz} $$
(6.180)

Solving for the coefficient matrix A one obtains

$$ {\mathbf{A}} = {\mathbf{S}}_{xz} {\mathbf{S}}_{zz}^{ - 1} $$
(6.181)

To obtain an equation that determines the required value of the matrix B, one can multiply both sides of Eq. 6.178 by their transpose and take expectations to obtain

$$ {\mathbf{S}}_{xx} = {\mathbf{AS}}_{zz} {\mathbf{A}}^{\text{T}} + {\mathbf{BB}}^{\text{T}} $$
(6.182)

Thus to reproduce the covariance matrix S xx the B matrix must satisfy

$$ {\mathbf{BB}}^{\text{T}} = {\mathbf{S}}_{xx} - {\mathbf{AS}}_{zz} {\mathbf{A}}^{\text{T}} $$
(6.183)

Equations 6.181 and 6.183 for determining A and B are completely analogous to Eqs. 6.172 and 6.175 for the A and B matrices of the lag-1 models developed earlier. However, for the disaggregation model as formulated, BB T and hence the matrix B can actually be singular or nearly so (Valencia and Schaake 1973). This occurs because the real seasonal flows sum to the observed annual flows. Thus given the annual flow at a site and (T − 1) of the seasonal flows, the value of the unspecified seasonal flow can be determined by subtraction.

If the seasonal variables \( X_{ty}^{s} \) correspond to nonlinear transformations of the actual flows \( Q_{ty}^{s} \), then BB T is generally sufficiently non-singular that a B matrix can be obtained by Cholesky decomposition. On the other hand, when the model is used to generate values of \( X_{ty}^{s} \) to be transformed into synthetic flows \( Q_{ty}^{s} \), the constraint that these seasonal flows should sum to the given value of the annual flow is lost. Thus the generated annual flows (equal to the sums of the seasonal flows) will deviate from the values that were to have been the annual flows. Some distortion of the specified distribution of the annual flows results. This small distortion can be ignored, or each year’s seasonal flows can be scaled so that their sum equals the specified value of the annual flow (Grygier and Stedinger 1988). The latter approach eliminates the distortion in the distribution of the generated annual flows by distorting the distribution of the generated seasonal flows. Koutsoyiannis and Manetas (1996) improve upon the simple scaling algorithm by including a step that rejects candidate vectors X y if the required adjustment is too large and instead generates another vector X y . This reduces the distortion in the monthly flows that results from the adjustment step.
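The fitting and generation steps in Eqs. 6.178–6.183 can be summarized in a short script. The following is a minimal sketch, not a production implementation: it assumes the transformed annual flows Z (years × N) and seasonal flows X (years × nT) have already been computed and have zero mean, and the array and function names are illustrative.

```python
import numpy as np

def fit_disaggregation(Z, X):
    """Estimate A and B of Eq. 6.178 from zero-mean transformed flows."""
    ny = Z.shape[0]
    Szz = Z.T @ Z / ny                    # E[Z Z^T]
    Sxx = X.T @ X / ny                    # E[X X^T]
    Sxz = X.T @ Z / ny                    # E[X Z^T]
    A = Sxz @ np.linalg.inv(Szz)          # Eq. 6.181
    BBt = Sxx - A @ Szz @ A.T             # Eq. 6.183
    # Cholesky requires BB^T to be positive definite; as noted in the text,
    # this is usually the case when the X's are nonlinear transformations
    # of the actual flows.
    B = np.linalg.cholesky(BBt)
    return A, B

def disaggregate_one_year(A, B, z_year, rng=None):
    """Generate one year's vector of transformed seasonal flows (Eq. 6.178)."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(B.shape[0])   # independent standard normals V_y
    return A @ z_year + B @ v
```

After back-transforming the generated seasonal values to flows, each year's seasonal flows can be rescaled so that they sum to the generated annual flow, as suggested by Grygier and Stedinger (1988).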

The disaggregation model has substantial data requirements. When the dimension of Z y is n and the dimension of the generated vector X y is m, the A matrix has mn elements. The lower diagonal B matrix and the symmetric S xx matrix, upon which it depends, each have m(m + 1)/2 nonzero or nonredundant elements. For example, when disaggregating two aggregate annual flow series to monthly flows at five sites, n = 2 and m = 12 × 5 = 60, so that A has 120 elements while B and S xx each have 1830 nonzero or nonredundant parameters. As the number of sites included in the disaggregation increases, the size of S xx and B increases rapidly. Very quickly the model can become overparameterized, and there will be insufficient data to estimate all parameters (Grygier and Stedinger 1988).
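As a quick check of the parameter counts quoted above, the short sketch below repeats the arithmetic for the example values of n and m (these values are the only assumptions).

```python
n, m = 2, 12 * 5                 # two annual series, monthly flows at five sites
a_elements = m * n               # elements of A
b_elements = m * (m + 1) // 2    # nonredundant elements of B (and of Sxx)
print(a_elements, b_elements)    # 120 1830
```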

In particular, one can think of Eq. 6.178 as a series of linear models generating each monthly flow \( {X}_{ty}^{k} \) for k = 1, t = 1, …, 12; k = 2, t = 1, …, 12; up to k = n, t = 1, …, 12, each of which reproduces the correlation of \( {X}_{ty}^{k} \) with all n annual flows \( {Z}_{y}^{k} \) and with all previously generated monthly flows. By the time one reaches the last flow in the last month, the model is attempting to reproduce n + (12n − 1) = 13n − 1 annual-to-monthly and monthly-to-monthly correlations. Because the model implicitly includes a constant, this means one needs k* = 13n years of data to obtain a unique solution for this critical equation. For n = 3, k* = 39. With a record length of 40 years, there would be only 1 degree of freedom left in the residual model error variance described by B. That would be unsatisfactory.

When flows at many sites or in many seasons are required, the size of the disaggregation model can be reduced by disaggregating the flows in stages rather than attempting to reproduce every season-to-season correlation explicitly, using what have been called condensed and contemporaneous models (Lane 1979; Stedinger and Vogel 1984; Grygier and Stedinger 1988; Koutsoyiannis and Manetas 1996). Condensed models do not attempt to reproduce the cross-correlations among all the flow variates at the same site within a year (Lane 1979; Stedinger et al. 1985). Contemporaneous models are like the Markov model developed earlier in Sect. 6.8.5: they are essentially models developed for individual sites whose innovation vectors V y have the needed cross-correlations to reproduce the cross-correlations of the concurrent flows (Camacho et al. 1985), as was done in Eq. 6.177. Grygier and Stedinger (1991) describe how this can be done for a condensed disaggregation model without generating inconsistencies.

6.8.6.2 Aggregation Models

One can start with annual or seasonal flows and break them down into flows in shorter periods representing months or weeks. Alternatively, one can start with a model that describes flows at the shortest time step included in the model; this latter approach has been referred to as an aggregation approach to distinguish it from the disaggregation approach.

One method for generating multiseason flow sequences is to convert the time series of seasonal flows Q ty into a homogeneous sequence of normally distributed zero-mean unit-variance random variables Z ty. These can then be modeled by an extension of the annual flow generators that have already been discussed. This transformation can be accomplished by fitting a reasonable marginal distribution to the flows in each season so as to be able to convert the observed flows \( q_{ty}^{s} \) into their transformed counterparts \( z_{ty}^{s} \), and vice versa. Particularly when only short streamflow records are available, this simple approach may yield a reasonable model of some streams for some studies. However, it implicitly assumes that the standardized series is stationary, in the sense that the season-to-season correlations of the flows do not depend on the seasons in question. This assumption seems highly questionable.
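A minimal sketch of this season-by-season standardization is shown below. It assumes a lognormal marginal in each season, which is only one reasonable choice; the array flows (years × seasons) and the function names are illustrative.

```python
import numpy as np

def standardize(flows):
    """Convert flows (years x seasons) to roughly N(0,1) deviates, season by season."""
    logq = np.log(flows)
    mean = logq.mean(axis=0)
    std = logq.std(axis=0, ddof=1)
    return (logq - mean) / std, (mean, std)

def destandardize(z, params):
    """Back-transform standardized deviates into synthetic flows."""
    mean, std = params
    return np.exp(z * std + mean)
```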

This theoretical difficulty with the standardized series can be overcome by introducing a separate streamflow model for each month. For example, the classic Thomas-Fiering model (Thomas and Fiering 1962) of monthly flows may be written

$$ Z_{t + 1,y} = \beta_{t} Z_{ty} + \sqrt {1 - \beta_{t}^{2}} V_{ty} $$
(6.184)

where the Z ty ’s are standard normal random variables corresponding to the streamflow in season t of year y, β t is the season-to-season correlation of the standardized flows, and V ty are independent standard normal random variables. The problem with this model is that it often fails to reproduce the correlation among months during a year and thus misrepresents the risk of multi-month and multi-year droughts (Hoshi et al. 1978).
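A minimal sketch of this recursion is given below, assuming the season-to-season correlations beta[t] (linking season t to season t + 1) have already been estimated from the standardized record; the names and the seeding choice are illustrative.

```python
import numpy as np

def thomas_fiering(beta, n_years, seed=None):
    """Generate standardized seasonal flows Z[y, t] using Eq. 6.184."""
    rng = np.random.default_rng(seed)
    T = len(beta)
    Z = np.empty((n_years, T))
    z = rng.standard_normal()             # arbitrary starting value
    for y in range(n_years):
        for t in range(T):
            Z[y, t] = z
            # step from season t to season t + 1 (wrapping into the next year)
            z = beta[t] * z + np.sqrt(1.0 - beta[t] ** 2) * rng.standard_normal()
    return Z
```

The generated standardized values would then be back-transformed to flows using the fitted seasonal marginal distributions, as in the standardization sketch above.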

For an aggregation approach to be attractive, it is necessary to use a model with greater persistence than the Thomas-Fiering model. Time series models that allow reproduction of different correlation structures are the Box-Jenkins autoregressive-moving average models (Box et al. 1994). These models are denoted ARMA(p, q) for a model that depends on the p previous flows and on q extra innovations V ty. For example, Eq. 6.142 would be called an AR(1) or ARMA(1, 0) model. A simple ARMA(1, 1) model is

$$ Z_{t + 1} = \phi_{1} \cdot Z_{t} + V_{t + 1} - \theta_{1} \cdot V_{t} $$
(6.185)

The correlations of this model have the values

$$ {\rho }_{1} = (1 - {\theta }_{1} \phi_{1} )\left( {\phi_{1} - {\theta }_{1} } \right)/\left( {1 + {\theta }_{1}^{2} - 2\phi_{1} {\theta }_{1} } \right) $$
(6.186)

for the first lag. For i > 1

$$ \rho_{i} = \phi_{1}^{i - 1} \rho_{1} $$
(6.187)

For \( \phi_{1} \) values near one and 0 < θ 1 < ϕ 1, the autocorrelations ρ k can decay much more slowly than those of the standard AR(1) model.
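The slower decay can be checked numerically from Eqs. 6.186 and 6.187. The short sketch below compares the theoretical autocorrelations of an ARMA(1, 1) model with those of an AR(1) model having the same lag-one correlation; the parameter values are illustrative.

```python
phi1, theta1 = 0.95, 0.6   # illustrative ARMA(1,1) parameters

# lag-one correlation of the ARMA(1,1) model (Eq. 6.186)
rho1 = (1 - theta1 * phi1) * (phi1 - theta1) / (1 + theta1 ** 2 - 2 * phi1 * theta1)

arma = [rho1 * phi1 ** (k - 1) for k in range(1, 11)]   # Eq. 6.187
ar1 = [rho1 ** k for k in range(1, 11)]                 # AR(1) with the same lag-1 value

for k, (a, b) in enumerate(zip(arma, ar1), start=1):
    print(f"lag {k:2d}: ARMA(1,1) {a:.3f}   AR(1) {b:.3f}")
```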

The correlation function ρ k of the general ARMA(p, q) model

$$ Z_{t + 1} = \sum\limits_{i = 1}^{p} {\phi_{i} \cdot Z_{t + 1 - i} + V_{t + 1} } - \sum\limits_{j = 1}^{q} {\theta_{j} \cdot V_{t + 1 - j} } $$
(6.188)

is a complex function that must satisfy a number of conditions to ensure the resultant model is stationary and invertible (Box et al. 1994).

ARMA(p, q) models have been extended to describe seasonal flow series by having their coefficients depend upon the season; these are called periodic autoregressive-moving average models, or PARMA models. Salas and Obeysekera (1992), Salas and Fernandez (1993), and Claps et al. (1993) discuss the conceptual basis of such stochastic streamflow models. For example, Salas and Obeysekera (1992) found that low-order PARMA models, such as a PARMA(2, 1), arise from reasonable conceptual representations of persistence in rainfall, runoff, and groundwater recharge and release. Claps et al. (1993, p. 2553) observe that the PARMA(2, 2) model, which may be needed if one wants to preserve year-to-year correlation, poses a parameter estimation challenge (see also Rasmussen et al. 1996). The PARMA(1, 1) model is more practical and easier to extend to the multivariate case (Hirsch 1979; Stedinger et al. 1985; Salas 1993; Rasmussen et al. 1996). Experience has shown that PARMA(1, 1) models do a better job of reproducing the correlation of seasonal flows beyond lag one (see, for example, Bartolini and Salas 1993).

6.9 Stochastic Simulation

This section introduces stochastic simulation. Much more detail on simulation is contained in later parts of this chapter and in the next chapter. Simulation is the most flexible and widely used tool for the analysis of complex water resources systems. Simulation is a trial-and-error approach: one must define the system being simulated, both its design and its operating policy, and then simulate it to see how it works. If the purpose is to find the best design and policy, many such alternatives must be simulated.

As with optimization models, simulation models may be deterministic or stochastic. One of the most useful tools in water resource systems planning is stochastic simulation. While optimization can be used to help define reasonable design and operating policy alternatives to be simulated, it takes simulation to gain insight into just how well each such alternative will perform. Stochastic simulation of complex systems on digital computers provides planners with a way to define the probability distribution of performance indices of complex stochastic water resources systems.

When simulating any system, the modeler designs an experiment. Initial flow, storage, and water quality conditions must be specified if these are being simulated. For example, reservoirs can start full, empty, or at random representative conditions. The modeler also determines what data are to be collected on system performance and operation and how they are to be summarized. The length of time the simulation is to be run must be specified and, in the case of stochastic simulations, the number of runs to be made must also be determined. These considerations are discussed in more detail by Fishman (2001) and in other books on simulation. The use of stochastic simulation and the analysis of the output of such models are introduced here primarily in the context of an example to illustrate what goes into a simulation model and how one can deal with the information that is generated.

6.9.1 Generating Random Variables

Included in any stochastic simulation model is some provision for the generation of sequences of random numbers that represent particular values of events such as rainfall, streamflows, or floods. To generate a sequence of values for a random variable, probability distributions for the variables must be specified. Historical data and an understanding of the physical processes are used to select appropriate distributions and to estimate their parameters (as discussed in Sect. 6.3.2).

Most computers have algorithms for generating random numbers uniformly distributed between zero and one. This uniform distribution is defined by its cdf and pdf:

$$ F_{U} (u) = \left\{ {\begin{array}{*{20}l} 0 \hfill & {{\text{for}}\;u \le 0} \hfill \\ u \hfill & {{\text{for}}\;0 \le u \le 1} \hfill \\ 1 \hfill & {{\text{for}}\;u \ge 1} \hfill \\ \end{array} } \right. $$
(6.189)

and

$$ f_{U} (u) = 1\quad{\text{if}}\;0 \le {u} \le 1\quad{\text{and}}\quad 0\;{\text{otherwise}} $$
(6.190)

These uniform random variables can then be transformed into random variables with any desired distribution. If F Q (q t ) is the cumulative distribution function of a random variable Q t in period t, then Q t can be generated using the inverse function, as

$$ Q_{t} = F_{Q}^{ - 1} [U_{t} ] $$
(6.191)

Here U t is the uniform random number used to generate Q t . This is illustrated in Fig. 6.13.

Fig. 6.13 The probability distribution of a random variable can be inverted to produce values of the random variable

Analytical expressions for the inverse of many distributions, such as the normal distribution, are not known, so that special algorithms are employed to efficiently generate deviates with these distributions (Fishman 2001).
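As a minimal sketch of Eq. 6.191, the snippet below generates deviates from a distribution whose inverse cdf is known in closed form, here an exponential distribution with an illustrative mean mu; for the normal and other distributions, the generators in numerical libraries implement the special algorithms mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 2.5                              # illustrative mean
u = rng.uniform(size=1000)            # U(0,1) deviates (Eqs. 6.189-6.190)
q = -mu * np.log(1.0 - u)             # Q = F_Q^{-1}(U) for the exponential cdf
print(q.mean())                       # should be close to mu
```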

6.9.2 River Basin Simulation

An example will demonstrate the use of stochastic simulation in the design and analysis of water resource systems. Assume that farmers in a particular valley have been plagued by frequent shortages of irrigation water. They currently draw water from an unregulated river to which they have water rights. A government agency has proposed construction of a moderate-size dam on the river upstream from points where the farmers withdraw water. The dam would be used to increase the quantity and reliability of irrigation water available to the farmers during the summer growing season.

After preliminary planning, a reservoir with an active capacity of 4 × 10⁷ m³ has been proposed for a natural dam site. It is anticipated that, because of the increased reliability and availability of irrigation water, the quantity of water desired will grow from an initial level of 3 × 10⁷ m³/yr after construction of the dam to 4 × 10⁷ m³/yr within 6 years. After that, demand will grow more slowly to 4.5 × 10⁷ m³/yr, the estimated maximum reliable yield. The projected demand for summer irrigation water is shown in Table 6.12.

Table 6.12 Projected water demand for irrigation water

A simulation study will evaluate how the system can be expected to perform over a 20-year planning period. Table 6.13 contains statistics that describe the hydrology at the dam site. The estimated moments are computed from the 45-year historic record.

Table 6.13 Characteristics of the river flow

Using the techniques discussed in the previous section, a Thomas-Fiering model is used to generate 25 lognormally distributed synthetic streamflow sequences. The statistical characteristics of the synthetic flows are those listed in Table 6.13. Use of only the 45-year historic flow sequence would not allow examination of the system’s performance over the large range of streamflow sequences that could occur during the 20-year planning period. Jointly, the synthetic sequences should describe the range of inflows that the system might experience. A larger number of sequences could be generated.

Table 6.14 Results of 25 20-year simulations

6.9.3 The Simulation Model

The simulation model is composed primarily of continuity constraints and the proposed operating policy. The volumes of water stored in the reservoir at the beginning of seasons 1 (winter) and 2 (summer) in year y are denoted by S 1y and S 2y, respectively. The reservoir’s winter operating policy is to store as much of the winter’s inflow Q 1y as possible. The winter release R 1y is determined by the rule

$$ R_{1y} = \left\{ {\begin{array}{*{20}l} {S_{1y} + Q_{1y} - K} \hfill & {{\text{if }}S_{1y} + Q_{1y} - R_{\hbox{min} } } > K \hfill \\ {R_{\hbox{min} } } \hfill & {{\text{if}}\;K \ge S_{1y} + Q_{1y} - R_{\hbox{min} } \ge 0} \hfill \\ {S_{1y} + Q_{1y} } \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(6.192)

where K is the reservoir capacity of 4 × 10⁷ m³ and R min is 0.50 × 10⁷ m³, the minimum release to be made if possible. The volume of water in storage at the beginning of the year’s summer season is

$$ S_{2y} = S_{1y} + Q_{1y} - R_{1y} $$
(6.193)

The summer release policy is to meet each year’s projected demand or target release D y , if possible, so that

$$ R_{2y} = \left\{ {\begin{array}{*{20}l} {S_{2y} + Q_{2y} - K} \hfill & {{\text{if}}\;S_{2y} + Q_{2y} - D_{y} > K} \hfill \\ {D_{y} } \hfill & {{\text{if}}\;0 \le S_{2y} + Q_{2y} - D_{y} \le K} \hfill \\ {S_{2y} + Q_{2y} } \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(6.194)

This operating policy is illustrated in Fig. 6.14.

Fig. 6.14 Summer reservoir operating policy. The shaded area denotes the feasible region of reservoir releases

The volume of water in storage at the beginning of the next winter season is

$$ S_{{1,{y} + 1}} = S_{2y} + Q_{2y} - R_{2y} $$
(6.195)
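A minimal sketch of this simulation model is given below. It implements the release rules and continuity equations of Eqs. 6.192–6.195 for one streamflow sequence; the arrays Q1 and Q2 (winter and summer inflows by year) and the demand vector D are assumed inputs, and all volumes are in units of 10⁷ m³.

```python
import numpy as np

K, R_MIN = 4.0, 0.5              # reservoir capacity and minimum winter release

def simulate_sequence(Q1, Q2, D, s0=0.0):
    """Simulate one sequence of years; return the summer releases R2[y]."""
    n_years = len(D)
    R2 = np.zeros(n_years)
    s1 = s0                                   # reservoir starts empty (S_1,1 = 0)
    for y in range(n_years):
        w = s1 + Q1[y]                        # water available in winter
        if w - R_MIN > K:                     # Eq. 6.192
            r1 = w - K
        elif w - R_MIN >= 0:
            r1 = R_MIN
        else:
            r1 = w
        s2 = w - r1                           # Eq. 6.193
        a = s2 + Q2[y]                        # water available in summer
        if a - D[y] > K:                      # Eq. 6.194
            R2[y] = a - K
        elif a - D[y] >= 0:
            R2[y] = D[y]
        else:
            R2[y] = a
        s1 = a - R2[y]                        # Eq. 6.195
    return R2

# Failure frequency for a sequence: fraction of years with R2 < D (Eq. 6.196)
# failure_rate = np.mean(simulate_sequence(Q1, Q2, D) < D)
```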

6.9.4 Simulation of the Basin

The question to be addressed by this simulation study is how well the reservoir will meet the farmers’ water requirements. Three steps are involved in answering this question. First, one must define the performance criteria or indices to be used to describe the system’s performance. The appropriate indices will, of course, depend on the problem at hand and the specific concerns of the users and managers of a water resource system. In our reservoir-irrigation system, several indices will be used relating to the reliability with which target releases are met and the severity of any shortages.

The next step is to simulate the proposed system to evaluate the specified indices. For our reservoir-irrigation system, the reservoir’s operation was simulated 25 times using the 25 synthetic streamflow sequences, each 20 years in length. Each of the 20 simulated years consisted of first a winter and then a summer season. At the beginning of the first winter season, the reservoir was taken to be empty (S 1y  = 0 for y = 1) because construction would just have been completed. The target release or demand for water in each year is given in Table 6.12.

After simulating the system, one must proceed to interpret the resulting information so as to gain an understanding of how the system might perform both with the proposed design and operating policy and with modifications in either the system’s design or its operating policy. To see how this may be done, consider the operation of our example reservoir-irrigation system.

The reliability p y of the target release in year y is the probability that the target release D y is met or exceeded in that year:

$$ p_{y} = \Pr [R_{2y} \ge D_{y} ] $$
(6.196)

The system’s reliability is a function of the target release D y, the hydrology of the river, the size of the reservoir, and the operating policy of the system. In this example, the reliability also depends on the year in question. Figure 6.15 shows the total number of failures that occurred in each year of the 25 simulations. In 3 of the 25 simulations, the reservoir did not contain sufficient water after the initial winter season to meet the demand the first summer. After year 1, few failures occur in years 2 through 9 because of the low demand. Surprisingly few failures occur in years 10 and 13, when demand has reached its peak; this is because the reservoir was normally full at the beginning of this period as a result of the lower demand in the earlier years. In year 14 and thereafter, failures occurred more frequently because of the high demand placed on the system. Thus one gets a sense of how the reliability of the target releases changes over time.

Fig. 6.15 Number of failures in each year of 25 twenty-year simulations

6.9.5 Interpreting Simulation Output

Table 6.14 contains several summary statistics of the 25 simulations. Column 2 of the table contains the average failure frequency in each simulation, which equals the number of years the target release was not met divided by 20, the number of years simulated. At the bottom of column 2 and the other columns are several statistics that summarize the 25 values of the different performance indices. The sample estimates of the mean and variance of each index are given as one way of summarizing the distribution of the observations. Another approach is specification of the sample median, the approximate interquartile range [x (6) − x (20)], and/or the range [x (1) − x (25)] of the observations, where x (i) is the ith smallest observation. Either set of statistics could be used to describe the center and spread of each index’s distribution.

Suppose that one is interested in the distribution of the system’s failure frequency or, equivalently, the reliability with which the target can be met. Table 6.14 reports that the mean failure rate for the 25 simulations is 0.084, implying that the average reliability over the 20-year period is 1 − 0.084 = 0.916 or 92%. The median failure rate is 0.05, implying a median reliability of 95%. These are both reasonable estimates of the center of the distribution of the failure frequency. Note that the actual failure frequency ranged from 0 (seven times) to 0.30. Thus the system’s reliability ranged from 100% to as low as 70, 75, and 80% in runs 17, 8, and 11. Certainly, the farmers are interested not only in knowing the mean or median failure frequency but also the range of failure frequencies they are likely to experience.

If one knew the form of the distribution function of the failure frequency, one could use the mean and standard deviation of the observations to determine a confidence interval within which the observations would fall with some prespecified probability. For example, if the observations are normally distributed, there is a 90% probability that the index falls within the interval μ x  ± 1.65σ x . Thus, if the simulated failure rates are normally distributed, there is about a 90% probability the actual failure rate is within the interval \( \bar{x} \) ± 1.65s x . In our case this interval would be [0.084 − 1.65(0.081), 0.084 + 1.65(0.081)] = [−0.050, 0.218].

Clearly, the failure rate cannot be less than zero, so that this interval makes little sense in our example.

A more reasonable approach to describing the distribution of a performance index whose probability distribution function is not known is to use the observations themselves. If the observations are of a continuous random variable , the interval [x (i) − x (n+1−i)] provides a reasonable estimate of an interval within which the random variable falls with probability

$$ P = \frac{n + 1 - i}{n + 1} - \frac{i}{n + 1} = \frac{n + 1-2i}{n + 1} $$
(6.197)

In our example, the range [x (1) − x (25)] of the 25 observations is an estimate of an interval in which a continuous random variable falls with probability (25 + 1 − 2)/(25 + 1) = 92%, while [x (6) − x (20)] corresponds to probability (25 + 1 – 2 × 6)/(25 + 1) = 54%.

Table 6.14 reports that for the failure frequency, [x (1) − x (25)] equals [0 − 0.30], while [x (6) − x (20)] equals [0 − 0.15]. Reflection on how the failure frequencies are calculated reminds us that the failure frequency can only take on the discrete, nonnegative values 0, 1/20, 2/20, …, 20/20. Thus, the random variable X cannot be less than zero. Hence, if the lower endpoint of an interval is zero, as is the case here, then [0 − x (k)] is an estimate of an interval within which the random variable falls with a probability of at least k/(n + 1). For k equal to 20 and 25, the corresponding probabilities are 77 and 96%.
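The coverage probabilities quoted above follow directly from Eq. 6.197; the short sketch below simply reproduces that arithmetic for n = 25.

```python
n = 25

def coverage(i):
    """Probability a continuous variate falls in [x_(i), x_(n+1-i)] (Eq. 6.197)."""
    return (n + 1 - 2 * i) / (n + 1)

print(coverage(1))   # [x_(1), x_(25)]: 24/26, about 0.92
print(coverage(6))   # [x_(6), x_(20)]: 14/26, about 0.54

# With the lower endpoint fixed at zero, [0, x_(k)] covers with probability
# at least k/(n + 1):
print(20 / (n + 1), 25 / (n + 1))   # about 0.77 and 0.96
```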

Often, the analysis of a simulated system’s performance centers on the average value of performance indices, such as the failure rate . It is important to know the accuracy with which the mean value of an index approximates the true mean . This is done by the construction of confidence intervals . A confidence interval is an interval that will contain the unknown value of a parameter with a specified probability. Confidence intervals for a mean are constructed using the t statistic,

$$ t = \frac{{\bar{x} - \mu_{x} }}{{s_{x} /\sqrt n }} $$
(6.198)

which for large n has approximately a standard normal distribution. Certainly, n = 25 is not very large, but the approximation to a normal distribution may be sufficiently good to obtain a rough estimate of how close the average frequency of failure \( \bar{x} \) is likely to be to μ x . A 100(1 − 2α)% confidence interval for μ x is, approximately,

$$ \bar{x} - t_{\alpha } \frac{{s_{x} }}{\sqrt n } \le \mu_{x} \le \bar{x} + t_{\alpha } \frac{{s_{x} }}{\sqrt n } $$

or

$$ 0.084 - t_{\alpha } \left( {\frac{0.081}{{\sqrt {25} }}} \right) \le \mu_{x} \le 0.084 + t_{\alpha } \left( {\frac{0.081}{{\sqrt {25} }}} \right) $$
(6.199)

If α = 0.05, then t α  = 1.65 and Eq. 6.199 becomes 0.057 ≤ μ x  ≤ 0.11.

Hence, based on the simulation output, one can be about 90% sure that the true mean failure frequency lies between 5.7 and 11%. This corresponds to a reliability between 89 and 94%. By performing additional simulations to increase the size of n, the width of this confidence interval can be decreased. However, this increase in accuracy may be an illusion, because the uncertainty in the parameters of the streamflow model has not been incorporated into the analysis.
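The interval of Eq. 6.199 can be reproduced in a few lines; the sketch below uses the sample statistics reported in Table 6.14 and the normal approximation for t α.

```python
import math

xbar, s, n = 0.084, 0.081, 25     # sample mean, standard deviation, number of runs
t_alpha = 1.65                    # approximate value for alpha = 0.05
half_width = t_alpha * s / math.sqrt(n)
print(xbar - half_width, xbar + half_width)   # about (0.057, 0.111)
```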

Failure frequency or system reliability describes only one dimension of the system’s performance. Table 6.14 contains additional information on the system’s performance related to the severity of shortages. Column 3 lists the frequencies with which the shortage exceeded 20% of that year’s demand. This occurred in approximately 2% of the years, or in 24% of the years in which a failure occurred. Taking another point of view, failures in excess of 20% of demand occurred in 9 of the 25 simulation runs, or 36% of the runs.

Columns 4 and 5 of Table 6.14 contain two other indices that pertain to the severity of the failures. The total shortfall in Column 4 is calculated as

$$ {\text{TS}} = \sum\limits_{y = 1}^{20} {\left[ {D_{2y} - R_{2y} } \right]}^{ + } $$

where

$$ \left[ Q \right]^{ + } = \left\{ {\begin{array}{*{20}l} Q \hfill & {{\text{if}}\,{Q} > 0} \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(6.200)

The total shortfall equals the total amount by which the target release is not met in years in which shortages occur.

Related to the total shortfall is the average deficit. The deficit is defined as the shortfall in any year divided by the target release in that year. The average deficit is

$$ {\text{AD}} = \frac{1}{m}\sum\limits_{y = 1}^{20} {\frac{{\left[ {D_{2y} - R_{2y} } \right]^{ + } }}{{D_{2y} }}} $$
(6.201)

where m is the number of failures (deficits ) or nonzero terms in the sum.

Both the total shortfall and the average deficit measure the severity of shortages. The mean total shortfall \( \overline{TS} \), equal to 1.00 for the 25 simulation runs, is a difficult number to interpret. While no shortage occurred in seven runs, the total shortage was 4.7 in run 8, in which the shortfall in two different years exceeded 20% of the target . The median of the total shortage values, equal to 0.76, is an easier number to interpret in that one knows that half the time the total shortage was greater and half the time less than this value.

The mean average deficit \( \overline{\text{AD}} \) is 0.106, or 11%. However, this mean includes an average deficit of zero in the seven runs in which no shortages occurred. The average deficit over the 18 runs in which shortages occurred is (11%)(25/18) = 15%. The average deficit in individual simulations in which shortages occurred ranges from 4 to 43%, with a median of 11.5%.

After examining the results reported in Table 6.14, the farmers might determine that the probability of a shortage exceeding 20% of a year’s target release is higher than they would like. They can deal with more frequent minor shortages, not exceeding 20% of the target, with little economic hardship, particularly if they are warned at the beginning of the growing season that less than the targeted quantity of water will be delivered. Then they can curtail their planting or plant crops requiring less water.

In an attempt to find out how better to meet the farmers’ needs, the simulation program was rerun with the same streamflow sequences and a new operating policy in which only 80% of the growing season’s target release is provided (if possible) if the reservoir is less than 80% full at the end of the previous winter season. This gives the farmers time to adjust their planting schedules and may increase the quantity of water stored in the reservoir to be used the following year if the drought persists.

As the simulation results with the new policy in Table 6.15 demonstrate, this new operating policy appears to have the expected effect on the system’s operation. With the new policy, only six severe shortages in excess of 20% of demand occur in the 25 twenty-year simulations, as opposed to 10 such shortages with the original policy. In addition, these severe shortages are all less severe than the corresponding shortages that occur with the same streamflow sequence when the original policy is followed.

Table 6.15 Results of 25 20-Year simulations with modified operating policy to avoid severe shortages

The decrease in the severity of shortages is obtained at a price. The overall failure frequency has increased from 8.4 to 14.2%. However, the latter figure is misleading because in 14 of the 25 simulations, a failure occurs in the first simulation year with the new policy, whereas only three failures occur with the original policy. Of course, these first-year failures occur because the reservoir starts empty at the beginning of the first winter and often does not fill that season.

Ignoring these first-year failures, the failure rates with the two policies over the subsequent 19 years are 8.2 and 12.0%. Thus the frequency of failures in excess of 20% of demand is decreased from 2.0 to 1.2% at the cost of increasing the frequency of all failures after the first year from 8.2 to 12.0%. Reliability decreases while vulnerability also decreases. If the farmers are willing to put up with more frequent minor shortages, it appears they can reduce their risk of experiencing shortages of greater severity.

The preceding discussion has ignored the statistical issue of whether the differences between the indices obtained in the two simulation experiments are of sufficient statistical reliability to support the analysis. If care is not taken, observed changes in a performance index from one simulation experiment to another may be due to sampling fluctuations rather than to modifications of the water resource system’s design or operating policy.

As an example, consider the change that occurred in the frequency of shortages. Let X 1i and X 2i be the simulated failure rates using the ith streamflow sequence with the original and modified operating policies. The random variables Y i  = X 1i  − X 2i for i equal 1 through 25 are independent of each other if the streamflow sequences are generated independently, as they were.

One would like to confirm that the random variable Y tends to be negative more often than it is positive and hence that policy 2 indeed results in more failures overall. A direct test of this hypothesis is provided by the sign test. Of the 25 paired simulation runs, y i  < 0 in 21 cases and y i  = 0 in four cases. We can ignore the times when y i  = 0. Note that if y i  < 0 and y i  > 0 were equally likely, then the probability of observing y i  < 0 in all 21 cases when y i  ≠ 0 is \( 2^{-21} \), or about 5 × 10⁻⁷. This is exceptionally strong evidence that the new policy has increased the failure frequency.

A similar analysis can be made of the frequency with which the release is less than 80% of the target. Failure frequencies differ between the two policies in only four of the 25 simulation runs. However, in all four cases where they differ, the new policy resulted in fewer severe failures. The probability of such a lopsided result, were it equally likely that either policy would result in a lower frequency of failures in excess of 20% of the target, is \( 2^{-4} \) = 0.0625. This is fairly strong evidence that the new policy indeed decreases the frequency of severe failures.
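The two sign-test probabilities used above come directly from the binomial model with p = 0.5 once ties (y i = 0) are set aside; the two-line check below is all that is required.

```python
print(0.5 ** 21)   # about 4.8e-7: all 21 nonzero differences show more failures under the new policy
print(0.5 ** 4)    # 0.0625: all 4 differing runs have fewer severe failures under the new policy
```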

Another approach to this problem is to ask if the difference between the average failure rates \( \bar{x}_{1}\; {\text{and}}\;\bar{x}_{2} \) is statistically significant; that is, can the difference between \( \bar{x}_{1}\; {\text{and}}\;\bar{x}_{2} \) be attributed to the fluctuations that occur in the average of any finite set of random variables? In this example, the significance of the difference between the two means can be tested using the random variable Y i defined as X 1i  − X 2i for i equal 1 through 25. The mean of the observed y i ’s is

$$ \bar{y} = \frac{1}{25}\sum\limits_{i = 1}^{25} {\left( {x_{1i} - x_{2i} } \right)} = \bar{x}_{1} - \bar{x}_{2} = 0.084 - 0.142 = - 0.058 $$
(6.202)

and their variance is

$$ s_{y}^{2} = \frac{1}{25}\sum\limits_{i = 1}^{25} {\left( {x_{1i} - x_{2i} - \bar{y}} \right)}^{2} = \left( {0.0400} \right)^{2} $$
(6.203)

Now if the sample size n, equal to 25 here, is sufficiently large, then t defined by

$$ t = \frac{{\bar{y} - \mu_{Y} }}{{s_{Y} /\sqrt n }} $$
(6.204)

has approximately a standard normal distribution. The closer the distribution of Y is to that of the normal distribution, the faster the convergence of the distribution of t is to the standard normal distribution with increasing n. If X 1i  − X 2i is normally distributed, which is not the case here, then each Y i has a normal distribution and t has Student’s t-distribution .

If E[x 1i ] = E[x 2i ], then μ Y equals zero and upon substituting the observed values of \( \bar{y} \) and \( s_{Y}^{2} \) into Eq. 6.204, one obtains

$$ t = \frac{ - 0.0580}{{0.0400/\sqrt {25} }} = - 7.25 $$
(6.205)

The probability of observing a value of t equal to −7.25 or smaller is less than 0.1% if n is sufficiently large that t is normally distributed. Hence it appears very improbable that μ Y equals zero.

This example provides an illustration of the advantage of using the same streamflow sequences when simulating both policies. Suppose that different streamflow sequences were used in all the simulations. Then the expected value of Y would not change, but its variance would be given by

$$ \begin{aligned} {\text{Var}}(Y) & = {E}\left[ {X_{1} - X_{2} - (\mu_{1} - \mu_{2} )} \right]^{2} \\ & = {E}\left[ {(X_{1} - \mu_{1} )^{2} } \right] - 2{E}\left[ {(X_{1} - \mu_{1} )(X_{2} - \mu_{2} )} \right] + {E}\left[ {(X_{2} - \mu_{2} )^{2} } \right] \\ & = \sigma_{{X_{1} }}^{2} - 2\,{\text{Cov}}\left( {X_{1} ,\,X_{2} } \right) + \sigma_{{X_{2} }}^{2} \\ \end{aligned} $$
(6.206)

where Cov(X 1, X 2) = E[(X 1 − μ 1)(X 2 − μ 2)] and is the covariance of the two random variables . The covariance between X 1 and X 2 will be zero if they are independently distributed as they would be if different randomly generated streamflow sequences were used in each simulation. Estimating \( \sigma_{{x_{1} }}^{2} \;{\text{and}}\;\sigma_{{{x}_{2} }}^{2} \) by their sample estimates, an estimate of what the variance of Y would be if Cov(X 1, X 2) were zero is

$$ \hat{\sigma }_{Y}^{2} = s_{{x_{1} }}^{2} + s_{{x_{2} }}^{2} = \left( {0.081} \right)^{2} + \left( {0.087} \right)^{2} = \left( {0.119} \right)^{2} $$
(6.207)

The actual sample estimate s Y equals 0.040; if independent streamflow sequences are used in all simulations, s Y will take a value near 0.119 rather than 0.040 (Eq. 6.203). A standard deviation of 0.119 yields a value of the test statistic

$$ t = \left. {\frac{{\bar{y} - \mu_{Y} }}{{0.119/\sqrt {25} }}} \right|_{\mu_{Y} = 0} = - 2.44 $$
(6.208)

If t is normally distributed, the probability of observing a value less than −2.44 is about 0.8%. This illustrates that use of the same streamflow sequences in the simulation of both policies allows one to better distinguish the differences in the policies’ performance. Using the same streamflow sequences, or other random inputs, one can construct a simulation experiment in which variations in performance caused by different random inputs are confused as little as possible with the differences in performance caused by changes in the system’s design or operating policy .
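The variance-reduction effect of using the same streamflow sequences for both policies can be seen by recomputing the test statistic under both assumptions; the sketch below simply repeats the arithmetic of Eqs. 6.202–6.208 using the summary values quoted in the text.

```python
import math

n = 25
ybar, s_paired = -0.058, 0.040            # Eqs. 6.202 and 6.203 (same sequences)
s1, s2 = 0.081, 0.087                     # sample standard deviations of x1 and x2
s_indep = math.sqrt(s1 ** 2 + s2 ** 2)    # Eq. 6.207 (independent sequences)

t_paired = ybar / (s_paired / math.sqrt(n))   # about -7.25 (Eq. 6.205)
t_indep = ybar / (s_indep / math.sqrt(n))     # about -2.44 (Eq. 6.208)
print(round(t_paired, 2), round(t_indep, 2))
```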

6.10 Conclusions

This chapter has introduced some approaches analysts can consider and use when working with the randomness or uncertainty of their data. Most of the data water resource systems analysts use are uncertain. This uncertainty comes from not understanding, as well as we would like, how our water resource systems (including their ecosystems) function, and from not being able to forecast the future perfectly. It is that simple. We do not know the exact amounts, qualities, and distributions over space and time of either the supplies of water we manage or the water demands we try to meet. We also do not know the benefits and costs, however measured, of any actions we take to manage both water supply and water demand.

The chapter began with an introduction to some probability concepts and methods for describing random variables and the parameters of their distributions. It then reviewed some commonly used probability distributions, how to determine the distributions of sample data, how to work with censored and partial-duration series data, methods of regionalization, and stochastic processes and time series analyses.

The chapter concluded with an introduction to a range of univariate and multivariate stochastic models that are used to generate stochastic streamflows, precipitation depths, temperatures, and evaporation. These methods have been widely used to generate temporal and spatial stochastic processes that serve as inputs to stochastic simulation models for system design, for system operations studies, and for the evaluation of the reliability and precision of different estimation algorithms. The final section of this chapter provided an example of stochastic simulation and the use of statistical methods to summarize the results of such simulations.

This chapter is merely an introduction to some of the tools available for use when dealing with uncertain data. Many of the concepts introduced in this chapter will be used in the chapters that follow on constructing and implementing various types of optimization, simulation, and statistical models. The references provided in the next section provide additional and more detailed information.

Although many of the methods presented in this and the following two chapters can describe many of the characteristics and consequences of uncertainty, it is unclear whether society knows exactly what to do with that information. Nevertheless, there seems to be an increasing demand from stakeholders involved in planning processes for information about the uncertainty associated with the impacts predicted by models. The challenge is not only to quantify that uncertainty, but also to communicate it in ways that inform, rather than confuse, the decision-making process.