Throughout this book, various concepts from statistics and probability will be used which are essential for understanding and constructing forecasts. In this chapter some of the fundamental tools and definitions will be presented. It is assumed that the reader knows some basic probability and statistical concepts, and this chapter is only intended as a refresher of the main ideas which will be used in other parts of the book. It covers basic definitions of distributions and methods for estimating them, as well as introducing important concepts such as autocorrelation and cross-correlation. For a more detailed description of basic statistics the authors recommend an introductory text such as [1] (in addition, see the further reading material listed in Appendix D).

3.1 Univariate Distributions

Real-world data typically has some degree of uncertainty, with the values it takes distributed over some (potentially infinite) range of points. A major part of probabilistic forecasting is trying to accurately describe, or model, the distribution of the values of interest. For the purposes of this book, distributions will be used to describe probabilistic forecasts and hence to understand the uncertainty associated with the estimates they produce. Note that the focus will be on demand data and hence the methods will typically apply to real continuous variables (as opposed to discrete/categorical variables). In this section only univariate distributions will be considered, i.e., those which describe a single real variable. We define a continuous variable, whose values depend on a random process and which has a continuous distribution, as a random variable. Notice that in the following, a random variable will be denoted with a capital letter, e.g. X, whereas lower case letters, e.g. x, will refer to particular realisations/observations of that random variable.

One of the most common ways to describe the distribution of a random variable X is through its probability density function or PDF, \(f_X(x)\), over some (possibly infinite) interval \(x \in (a, b) \subseteq \mathbb {R}\). The PDF is a non-negative function which describes the relative likelihood that any value \(x \in (a, b)\) will be observed and can be used to calculate the probability of the variable taking some value within an interval \((a_1, b_1) \subseteq (a, b)\) as follows

$$\begin{aligned} P\{a_1 \le X \le b_1\} = \int _{a_1}^{b_1}f_X(x) dx. \end{aligned}$$
(3.1)

Note that, by definition, the PDF integrates to one over the whole interval (a, b), and hence the probability in (3.1) is always bounded above by one.

An alternative but equally important representation of the distribution is the cumulative distribution function or CDF. Again assume the CDF is defined over some (possibly infinite) interval \(x \in (a, b) \subseteq \mathbb {R}\). The CDF represents the probability that the random variable X will take a value less than or equal to some specified value x, and is often written as a function \(F_X(x) = P\{ X \le x\}\). It is related to the PDF via

$$\begin{aligned} F_X(x) = \int _{a}^{x}f_X(t) dt. \end{aligned}$$
(3.2)

That is, the CDF is the integral of \(f_X(t)\) for t over the interval (a, x). Notice that the CDF is a monotonically increasing function, which means that if \(x_1\le x_2\) then \(F_X(x_1) \le F_X(x_2)\). It also satisfies the limits \(\lim _{x \longrightarrow a}F_X(x) = 0\) and \(\lim _{x \longrightarrow b}F_X(x) = 1\).

The expected value and variance are two important derived values associated with a PDF/CDF. The expected value is defined as

$$\begin{aligned} \mathbb {E}(X) = \int _{a}^{b}t f_X(t) dt, \end{aligned}$$
(3.3)

and the variance as

$$\begin{aligned} \mathbb {V}ar(X) = \int _{a}^{b}(t-\mu )^2 f_X(t) dt, \end{aligned}$$
(3.4)

where \(\mu =\mathbb {E}(X)\). The expectation (or mean) essentially represents a weighted average of the values of the random variable X with values weighted by the probability density \(f_X\). It acts as a typical, or expected, value of the random variable.

The variance is the expected value for the squared deviation from the mean. It is often used to represent the spread of the data from the mean and hence is a simple measure for the uncertainty of the random variable. The square root of the variance is known as the standard deviation (often denoted \(\sigma = \sqrt{\mathbb {V}ar(X)}\)).

One of the most commonly studied, and important, distributions is the one dimensional Gaussian (also known as the Normal) distribution which has a PDF defined by

$$\begin{aligned} f(x) = \frac{1}{\sqrt{2\pi } \sigma } \exp \left( -\frac{(x-\mu )^2}{2\sigma ^2} \right) . \end{aligned}$$
(3.5)

Thus the Gaussian is defined entirely by two parameters, namely the mean \(\mu \) and the variance \(\sigma ^2\). When \(\mu =0\) and \(\sigma =1\) then (3.5) is known as the standard normal distribution.
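To make this concrete, the following short Python sketch (assuming NumPy and SciPy are available; the parameter values are purely illustrative) evaluates the Gaussian PDF and CDF of (3.5) and checks the properties discussed above.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 1.5                      # illustrative mean and standard deviation
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 400)

pdf = norm.pdf(x, loc=mu, scale=sigma)    # f(x) as in Eq. (3.5)
cdf = norm.cdf(x, loc=mu, scale=sigma)    # F(x), the integral of the PDF

print(np.trapz(pdf, x))                   # the PDF integrates to approximately one
print(norm.cdf(3.5, mu, sigma) - norm.cdf(0.5, mu, sigma))  # P{0.5 <= X <= 3.5}, Eq. (3.1)
```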

Fig. 3.1 Examples of the PDF (left) and the CDF (right) for the Gaussian distributions with different means and standard deviations

An example of the Gaussian distribution for various means and standard deviations is shown in Fig. 3.1. Notice that the Gaussian distribution is always bell-shaped and is symmetric around the mean value. Also the bigger the variance the wider the distribution, as expected. In the plot for the corresponding cumulative distribution, smaller variances translate to steeper gradients.

Not all variables are Gaussian distributed, or even symmetrically distributed, and there is a whole host of other parametric families of distributions. The lognormal distribution is suitable for positively skewed variables with a long tail to the right and has a PDF defined by

$$\begin{aligned} f(x) = \frac{1}{x \sqrt{2 \pi } \sigma } \exp \left( -\frac{(\ln (x)-\mu )^2}{2\sigma ^2} \right) , \end{aligned}$$
(3.6)

for parameters \(\mu \) and \(\sigma \). Notice that this is simply the Gaussian distribution (3.5) applied to the logarithm of the variable. There are also more general distributions such as the gamma distribution, which can represent a whole range of different distribution shapes. The gamma CDF has the relatively complicated form

$$\begin{aligned} Gamma(X, \alpha , \beta ) = \frac{1}{\beta ^\alpha \Gamma (\alpha )}\int _{0}^{X} t^{\alpha -1} e^{-t/\beta } dt, \end{aligned}$$
(3.7)

for parameters \(\alpha \) and \(\beta \), often called the shape and scale parameters respectively, where \(\Gamma (x) = \int _0^\infty t^{x-1}e^{-t} dt\) is the so-called gamma function. The PDFs of the lognormal and gamma distributions for various values of their parameters are shown in Fig. 3.2.
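As a hedged sketch (assuming SciPy is available), the lognormal and gamma densities above can be evaluated as follows. Note that SciPy parameterises the lognormal by \(s=\sigma \) and scale \(=e^{\mu }\), and the gamma by shape \(a=\alpha \) and scale \(=\beta \); the parameter values below are illustrative only.

```python
import numpy as np
from scipy.stats import lognorm, gamma

x = np.linspace(0.01, 12, 500)

# Lognormal PDF of Eq. (3.6) with mu = 0.5, sigma = 0.5.
lognormal_pdf = lognorm.pdf(x, s=0.5, scale=np.exp(0.5))

# Gamma distribution of Eq. (3.7) with shape alpha = 2 and scale beta = 2.
gamma_pdf = gamma.pdf(x, a=2.0, scale=2.0)
gamma_cdf = gamma.cdf(x, a=2.0, scale=2.0)
```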

Fig. 3.2 Examples of PDFs for a the lognormal and b the gamma distribution for various values of their parameters

The Gaussian, gamma and lognormal distributions are examples of parametric distribution functions because they are defined completely in terms of their input parameters. There are also nonparametric distributions which do not assume that the data come from any specific parametric family of functions. Kernel density estimation is a popular method for nonparametrically estimating a distribution and will be described in Sect. 3.4. In this book, the main focus will be on nonparametric models, but the Gaussian distribution will also be commonly used, especially when modelling the distribution of errors.


3.2 Quantiles and Percentiles

The continuous CDF admits an inverse function \(F_X^{-1}\) which takes any \(q \in [0, 1]\) and gives a unique value \(x_q = F_X^{-1}(q)\). This value is the qth quantile, or q-quantile, also known as the 100q-th percentile [2]. The most well-known values are the median, which is the 0.5-quantile or 50th percentile, and the lower and upper quartiles, which are the 25th and 75th percentiles, or the 0.25- and 0.75-quantiles, respectively. Essentially the q-quantile defines the value in the domain below which a proportion q of the data lies. In other words, the probability that the random variable X takes a value less than \(x_q\) is q.Footnote 1

Examples of the 50th and 90th percentiles for the standard normal distribution are shown in Fig. 3.3. Notice that the q-quantile is simply the domain value corresponding to where the horizontal line at \(y=q\) intersects the CDF, as illustrated in Fig. 3.3. Often the complete CDF is unknown but a finite number of quantiles can be estimated. When enough quantiles are calculated, an accurate estimate can be formed of the overall distribution. A technique for estimating the quantiles from observations is given in Sect. 3.4.

Fig. 3.3 Illustration of the 0.5- and 0.9-quantiles on the CDF for the standard normal distribution

Quantiles are important tools for estimating distributions when only samples of the overall population are available and can be used to create probabilistic forecasts as will be demonstrated in Sect. 11.4.
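A minimal Python sketch (assuming NumPy and SciPy) of these ideas: the inverse CDF of the standard normal gives exact quantiles, while np.quantile estimates them from a finite sample.

```python
import numpy as np
from scipy.stats import norm

print(norm.ppf(0.5))                 # 0.5-quantile (median) of the standard normal: 0.0
print(norm.ppf([0.25, 0.75, 0.9]))   # lower quartile, upper quartile and 0.9-quantile

rng = np.random.default_rng(0)
samples = rng.standard_normal(1000)
print(np.quantile(samples, [0.25, 0.5, 0.75]))   # sample estimates of the quartiles
```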

3.3 Multivariate Distributions

Multivariate distributions are an extension of the univariate distributions introduced in Sect. 3.1 to distributions of more than one variable. This time consider N random variables \(X_1, X_2, \ldots , X_N\). Analogous to the PDF for a univariate distribution is the joint probability density

$$\begin{aligned} f_{X_1, X_2, \ldots , X_N}(x_1, x_2, \ldots , x_N) \end{aligned}$$
(3.8)

which, like the univariate PDF, describes the relative likelihood that the random variables \((X_1, X_2, \ldots , X_N)\) take the values \((x_1, x_2, \ldots , x_N)\). The CDF for a multivariate distribution is defined as

$$\begin{aligned} F_{X_{1},\ldots ,X_{N}}(x_{1},\ldots ,x_{N})=P \{X_{1}\le x_{1},\ldots ,X_{N}\le x_{N} \}, \end{aligned}$$
(3.9)

and can be written in terms of the joint probability distribution

$$\begin{aligned} F_{X_{1},\ldots ,X_{N}}(x_{1},\ldots ,x_{N}) = \int _{-\infty }^{x_N} \ldots \int _{-\infty }^{x_1} f_{X_1, X_2, \ldots , X_N}(t_1, t_2, \ldots , t_N) dt_1 dt_2 \ldots dt_N. \end{aligned}$$
(3.10)

If \(N=2\) then the multivariate distribution is known as a bivariate distribution. One of the simplest multivariate distributions is the multivariate Gaussian joint distribution defined by

$$\begin{aligned} f_{X_{1},\ldots ,X_{N}}(x_{1},\ldots ,x_{N}) = \frac{1}{(2\pi )^{N/2}\det (\boldsymbol{\Sigma })^{1/2}}\exp \left( -\frac{1}{2}(\textbf{x}- \boldsymbol{\mu })^T\boldsymbol{\Sigma }^{-1}(\textbf{x}- \boldsymbol{\mu }) \right) , \end{aligned}$$
(3.11)

where \(\textbf{x} =(x_{1},\ldots ,x_{N})^T \in \mathbb {R}^N\), \(\boldsymbol{\mu }=(\mu _{1},\ldots ,\mu _{N})^T \in \mathbb {R}^N\) with \(\mu _k\) the expected value of the random variable \(X_k\), and \(\boldsymbol{\Sigma } \in \mathbb {R}^{N \times N}\) is the covariance matrix, which describes the covariance between the variables. The diagonal elements of this matrix are the variances of the random variables, i.e. \((\boldsymbol{\Sigma })_{k,k} = \mathbb {V}ar(X_k)\), and the off-diagonal elements describe the variation of one random variable in relation to another. For two random variables \(X_k\) and \(X_m\), the covariance between them can be written in terms of the expectation as

$$\begin{aligned} Cov(X_k,X_m)=\mathbb {E}[(X_k-\mathbb {E}[X_k])(X_m- \mathbb {E}[X_m])]. \end{aligned}$$
(3.12)

Notice that the covariance matrix is symmetric and positive semi-definite.Footnote 2 The (Pearson) correlation is defined to be

$$\begin{aligned} Corr(X_k,X_m)=Cov(X_k,X_m)/\sigma _k\sigma _m, \end{aligned}$$
(3.13)

and is bounded in \([-1, 1]\). It is a measure of the linear dependence between two variables; in other words, the correlation between two variables is the covariance scaled by the standard deviations of the variables. If two variables are independent (i.e. a change in one variable doesn't affect the other) then they are uncorrelated and their covariance is equal to zero. For the special case of two variables, the bivariate covariance matrix can be written as

$$ Cov= \begin{bmatrix} \sigma _1^2 & \sigma _1\sigma _2 \rho \\ \sigma _1\sigma _2 \rho & \sigma _2^2 \end{bmatrix} $$

where \(\rho \in [-1, 1]\) is the correlation \(Corr(X_1,X_2)\) and \(\sigma _1, \sigma _2\) are the standard deviations of the random variables \(X_1\) and \(X_2\). An example of a bivariate Gaussian distribution is shown in Fig. 3.4 for \(\rho = 0.6\), \(\sigma _1^2=0.6\) and \(\sigma _2^2=1\). Here (a) is the joint density and (b) is the joint CDF. The correlation \(\rho \) is relatively large and hence the variables are somewhat correlated with each other.
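The following sketch (assuming NumPy; the values mirror the illustrative parameters of Fig. 3.4) builds the bivariate covariance matrix above and samples from the corresponding bivariate Gaussian; the sample correlation should be close to \(\rho \).

```python
import numpy as np

rho, var1, var2 = 0.6, 0.6, 1.0
sigma1, sigma2 = np.sqrt(var1), np.sqrt(var2)

# Bivariate covariance matrix with off-diagonal terms rho * sigma1 * sigma2.
cov = np.array([[sigma1 ** 2,            rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2,  sigma2 ** 2]])

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])   # approximately 0.6
```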

Fig. 3.4 Examples of a bivariate normal PDF (a) and CDF (b)

To simplify the discussion on multivariate distributions, the focus of the rest of this chapter will be on bivariate distributions, but the results extend to more general multivariate distributions.

One of the most important sub-structures of a multivariate distribution is the marginal distribution. Given a joint bivariate distribution, \(f_{X_{1}, X_{2}}(x_{1},x_2)\), the marginal distribution of \(X_1\) describes the distribution of \(X_1\) given no knowledge of \(X_2\) and is found by integrating over \(X_2\)

$$\begin{aligned} f_{X_1}(x_1)= \int _{-\infty }^{\infty } f_{X_1, X_2}(x_1, x_2) dx_2. \end{aligned}$$
(3.14)

Similarly the marginal distribution of \(X_2\) can be defined

$$\begin{aligned} f_{X_2}(x_2)= \int _{-\infty }^{\infty } f_{X_1, X_2}(x_1, x_2) dx_1. \end{aligned}$$
(3.15)

The joint and marginal distributions for a bivariate Gaussian (the same joint as given in Fig. 3.4) are illustrated in Fig. 3.5. Notice that if the random variables \(X_1\) and \(X_2\) are independent then the joint density is simply a product of the marginals for each variable, \(f_{X_{1}, X_{2}}(x_{1},x_2) =f_{X_{1}}(x_{1})f_{X_{2}}(x_2)\). The marginals are often easier to estimate since they only require modelling each individual variable rather than any interdependencies between them.
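As a hedged numerical sketch (assuming NumPy and SciPy), the marginal of \(X_1\) in Eq. (3.14) can be approximated by integrating the joint density over a grid of \(x_2\) values; for a bivariate Gaussian the result should match the known Gaussian marginal.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Bivariate Gaussian with variances 0.6 and 1 and correlation 0.6 (as in Fig. 3.4).
cov = [[0.6, 0.6 * np.sqrt(0.6)], [0.6 * np.sqrt(0.6), 1.0]]
joint = multivariate_normal(mean=[0.0, 0.0], cov=cov)

x1 = np.linspace(-4, 4, 201)
x2 = np.linspace(-6, 6, 401)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
f_joint = joint.pdf(np.dstack([X1, X2]))      # f_{X1,X2}(x1, x2) on the grid

f_marginal = np.trapz(f_joint, x2, axis=1)    # integrate out x2, Eq. (3.14)

# The marginal of X1 is Gaussian with mean 0 and variance 0.6.
print(np.max(np.abs(f_marginal - norm.pdf(x1, scale=np.sqrt(0.6)))))   # close to zero
```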

If one of the variables, say \(X_2\), is observed, so its value \(x_2\) is known for certain, then the distribution of \(X_1\) given this particular value is known as the conditional density and is written \(f_{X_{1}| X_{2}}(x_{1}|x_2)\).

Fig. 3.5 Illustration of the joint and marginal of a bivariate Gaussian distribution. The contours show equal values from the joint distribution. Samples from the joint distribution are shown as the scatter plot whereas estimates of the marginal distributions for each variable are drawn on the corresponding axes

The joint, marginal and conditional distributions are related by the following formula

$$\begin{aligned} f_{X_{1}| X_{2}}(x_{1}|x_2) = \frac{f_{X_{1}, X_{2}}(x_{1}, x_2)}{f_{X_{2}}(x_2)}. \end{aligned}$$
(3.16)

As a simple illustrative example, consider randomly sampling from a bag containing ten identical-looking balls, each with a unique number, one to ten, written on them. Since each ball has an equal chance of being sampled then the probability of drawing any of the balls is the same, 1/10. However the conditional probability of drawing a three given that we know that the ball has an odd value written on it is 1/5.
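A minimal simulation (assuming NumPy) of the ball example illustrates the difference between the unconditional and conditional probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)
draws = rng.integers(1, 11, size=100_000)   # repeated draws of balls numbered 1..10

odd = draws % 2 == 1
print(np.mean(draws == 3))          # unconditional probability, approximately 1/10
print(np.mean(draws[odd] == 3))     # conditional on an odd ball, approximately 1/5
```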


3.4 Nonparametric Distribution Estimates

In this section basic nonparametric methods for estimating and understanding distributions from available observations are introduced. Some of these, such as kernel density estimation methods, will be used as building blocks for some of the probabilistic forecasts in Chap. 11. These types of methods are useful when the data is known not to come from a specific parametric family of distributions, e.g. Gaussian or gamma (see Sect. 3.1).

One of the most common methods for estimating the PDF is the histogram. A histogram is simply a count of the number of observations within predefined discrete partitions (called bins) of the variable's range. For a univariate random variable the bins are just intervals [a, b]; for multivariate data they are regions defined by products of intervals, e.g. \([a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_N, b_N]\). Histograms are usually restricted to univariate and bivariate data due to the difficulty of visualising higher dimensions. Each bin is usually of equal size, although this is not required. An example of a histogram using 20 equally spaced bins is shown in Fig. 3.6a.
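A minimal Python sketch (assuming NumPy; the sample is illustrative) of a histogram estimate with 20 equally spaced bins, normalised so the bar heights approximate the PDF.

```python
import numpy as np

rng = np.random.default_rng(3)
observations = rng.gamma(shape=2.0, scale=2.0, size=200)   # illustrative sample

counts, bin_edges = np.histogram(observations, bins=20, density=True)
# counts[i] is the estimated density over the bin [bin_edges[i], bin_edges[i + 1]].
```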

Fig. 3.6 Examples of estimating a distribution from 200 observations using a a histogram with 20 equally spaced bins and b a kernel density estimate with two different bandwidths

A limitation of the histogram approach is its dependence on the position and size of the bins: small adjustments to them can change the shape of the plot significantly. Further, the histogram is a discrete representation of a continuous variable, which means some information is lost when binning. A preferable, but slightly more complicated, estimator is kernel density estimation (KDE). A KDE is a summation of small smooth functions K, called kernels, each centred on an observation \(x_k\) of the random variable X so that every observation contributes to the overall PDF, and can be written as:

$$\begin{aligned} g(X) = \frac{1}{Nh} \sum _{k=1}^{N} K \left( \frac{X-x_k}{h} \right) , \end{aligned}$$
(3.17)

where h is the bandwidth hyperparameter. There are a variety of kernels but one of the most common is the Gaussian kernel defined as

$$\begin{aligned} K(x) = \frac{1}{\sqrt{2\pi }} \exp \left( -\frac{1}{2}x^2\right) . \end{aligned}$$
(3.18)

The most important parameter is the bandwidth h, which determines the smoothness of the final distribution: the larger h is, the smoother the final estimate. The optimal value of this parameter is often found through cross-validation (see Sect. 8.2), although there are rules of thumb that can be used when there are assumptions about the underlying shape of the distribution. A KDE with two different bandwidths is shown in Fig. 3.6b (for the same data as in Fig. 3.6a). Notice that if the bandwidth is too small then the KDE will overfit to individual observations (see Sect. 8.1.3). In contrast, a bandwidth which is too large will mean features are lost due to underfitting to the observations. Extensions of KDEs to generate probabilistic forecasts will be explored in Sect. 11.5.
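A hedged sketch (assuming SciPy) of a Gaussian kernel density estimate for two different bandwidths; note that gaussian_kde's bw_method scales the sample standard deviation, so the values below are indicative only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
observations = rng.gamma(shape=2.0, scale=2.0, size=200)

x = np.linspace(0.0, 15.0, 300)
density_smooth = gaussian_kde(observations, bw_method=0.5)(x)   # larger bandwidth, may underfit
density_rough = gaussian_kde(observations, bw_method=0.1)(x)    # smaller bandwidth, may overfit
```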

Now consider estimation of the CDF for a univariate distribution with samples \(x_1, x_2, \ldots , x_N\). The CDF can be easily estimated using the empirical cumulative distribution function (ECDF) defined by

$$\begin{aligned} \hat{G}_X(x) = \frac{\text {number of observations less than } x}{N}=\frac{1}{N}\sum _{k=1}^N \textbf{1}_{x_k<x}, \end{aligned}$$
(3.19)

where \(\textbf{1}_S\) is the indicator function which takes the value 1 if the statement S is true and 0 otherwise; in this case the statement is whether the observation is less than x. In other words, the empirical CDF simply counts the proportion of observations less than a particular value. An example of the CDF for the standard normal and the corresponding empirical CDF (for 20 randomly sampled points) is shown in Fig. 3.7. Notice that the empirical CDF jumps at every observed point, and the more observations available the closer the approximation is to the true CDF.
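A minimal sketch (assuming NumPy) of the empirical CDF of Eq. (3.19), evaluated at a single point.

```python
import numpy as np

rng = np.random.default_rng(4)
samples = rng.standard_normal(20)

def ecdf(samples, x):
    """Proportion of observations less than x, as in Eq. (3.19)."""
    return np.mean(samples < x)

print(ecdf(samples, 0.0))   # close to 0.5 for samples from the standard normal
```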

Fig. 3.7 Example of empirical CDF with true CDF for the standard normal distribution. The ECDF was generated using 20 randomly drawn points from the true CDF

The quantiles (Sect. 3.2) can also be estimated from a finite sample of points. Suppose the observations are ordered, i.e. the samples \(x_1, x_2, \ldots , x_N\) are such that \(x_k \le x_{k+1}\) for \(k=1, \ldots , N-1\) (these are also known as the order statistics). Then the q-sample quantile, for \(q \in (0,1)\), is defined as \(x_k\) where k is qN rounded to the nearest integer.
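A hedged sketch (assuming NumPy) of the q-sample quantile from the order statistics, following the definition above, alongside NumPy's built-in estimate for comparison.

```python
import numpy as np

rng = np.random.default_rng(5)
samples = np.sort(rng.standard_normal(200))   # order statistics x_1 <= ... <= x_N

def sample_quantile(sorted_samples, q):
    """q-sample quantile: x_k where k is qN rounded to the nearest integer."""
    N = len(sorted_samples)
    k = min(max(int(round(q * N)), 1), N)     # keep k within 1..N
    return sorted_samples[k - 1]              # convert to zero-based indexing

print(sample_quantile(samples, 0.5))          # roughly the median of the sample
print(np.quantile(samples, 0.5))              # NumPy's estimate for comparison
```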

Fig. 3.8 Example of a box plot for two data sets. Data set 1 is the same as that used in Fig. 3.6. Data set 2 is a simple Gaussian with mean and standard deviation equal to 0.5

Of course, the PDF estimate created from the KDE can also be turned into a CDF estimate using the definition of the CDF itself (Sect. 3.1). Moreover, since the kernels are often densities themselves (as with the Gaussian), the CDF estimate is simply the (normalised) sum of the CDFs of each kernel.

Sometimes it is not necessary to visualise the entire distribution of points, and a good general impression of the distribution can be gained from a few summary values. A box plot gives a visualisation of a few summary statistics of a distribution. An example of a box plot for two data sets is shown in Fig. 3.8, where the first data set is the same as that shown in Fig. 3.6, whilst data set 2 is a simple Gaussian distribution with mean and standard deviation equal to 0.5. What is included in a box plot can vary, but they typically show the following:

  • A centralised value which is given by a line within the main box. In the boxplot in Fig. 3.8 this is the median and is given by the red line.

  • The first and third quartile which are given by the bottom and the top of a box.

  • Whiskers which show the span of the points to the smallest and largest values (often this does not include points considered outliers). These are the dotted lines in Fig. 3.8.

  • Outlier values defined as those which are more than 1.5 times the interquartile range from the top or bottom of the box. These are given by red crosses in the plot.

The box plot, although relatively simple, can be used to generate some insight into the data. Firstly, it gives a very basic representation of the spread of the data, including where the middle \(50\%\) of the data lies. Comparing the box plots for the two data sets can also indicate whether there is significant overlap between them. Finally, if the median line is not in the centre of the box then this indicates skewness in the data.
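A minimal sketch (assuming Matplotlib and NumPy; the two data sets are illustrative stand-ins for those in Fig. 3.8) producing a side-by-side box plot comparison.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
data1 = rng.gamma(shape=2.0, scale=2.0, size=1000)    # skewed data set
data2 = rng.normal(loc=0.5, scale=0.5, size=1000)     # Gaussian data set

plt.boxplot([data1, data2])                           # median, quartiles, whiskers, outliers
plt.xticks([1, 2], ["Data set 1", "Data set 2"])
plt.ylabel("Value")
plt.show()
```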


3.5 Sample Statistics and Correlation

The expected value and variance are important values associated with distributions and are often used as estimates of ‘typical’ values and uncertainty respectively. However, in practice the distribution is not known, and important features of a distribution can only be estimated from the available observations. Suppose \(x_1, \ldots , x_N\) is a sample of independent observations of a univariate continuous random variable X (i.e. none of the samples depend on each other). An estimate for the mean is the sample mean, defined as

$$\begin{aligned} \hat{\mu } = \frac{1}{N} \sum _{k=1}^N x_k. \end{aligned}$$
(3.20)

Similarly there is the sample variance

$$\begin{aligned} \hat{\sigma ^2} = \frac{1}{N-1} \sum _{k=1}^{N}( x_k-\hat{\mu })^2, \end{aligned}$$
(3.21)

which is divided by \(N-1\) rather than N to ensure the estimator is unbiased.Footnote 3 As in Sect. 3.1, the sample standard deviation is the square root of the sample variance, \(\hat{\sigma }\). Other important summary statistics include the median (the 0.5-quantile), a measure of central tendency, and the inter-quartile range (the difference between the 0.75- and 0.25-quantiles), a measure of spread. These values tend to be more robust (i.e. less sensitive) to outliers than the mean and variance.
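A minimal sketch (assuming NumPy) of the sample mean and sample variance of Eqs. (3.20) and (3.21), including the \(N-1\) divisor.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=10.0, scale=2.0, size=500)

mean = x.sum() / len(x)                               # Eq. (3.20)
variance = ((x - mean) ** 2).sum() / (len(x) - 1)     # Eq. (3.21), unbiased estimator

print(mean, variance)
print(np.mean(x), np.var(x, ddof=1))                  # the same quantities via NumPy
```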

In Sect. 3.3 the concepts of covariance and correlation were introduced for continuous random variables whose distributions are known. Now consider observations \((x_{k, 1}, x_{k, 2})\), \(k=1, \ldots , N\), of the bivariate random variables \((X_1,X_2)\); the sample Pearson correlation can then be defined as

$$\begin{aligned} \rho =\frac{\sum _{k=1}^N (x_{k, 1} -\bar{x}_1)(x_{k, 2} -\bar{x}_2)}{\sqrt{\sum _{k=1}^N (x_{k, 1} -\bar{x}_1)^2}\sqrt{\sum _{k=1}^N (x_{k, 2} -\bar{x}_2)^2}}, \end{aligned}$$
(3.22)

where \(\bar{x}_1, \bar{x}_2\) are the sample means for random variables \(X_1\) and \(X_2\) respectively.

In addition to the Pearson correlation, another common measure of correlation is Spearman's rank correlation coefficient. This is defined simply as the Pearson correlation of the ranks of the values of the two random variables. In other words, take the observations \(x_{1, 1}, x_{2, 1}, \ldots , x_{N, 1}\) for random variable \(X_{1}\) and assign each a rank, i.e. 1 for the largest value, 2 for the second largest, and so on. Do the same for the second random variable, \(X_2\). Then simply calculate the Pearson correlation of these rankings using Eq. (3.22).
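A hedged sketch (assuming NumPy and SciPy; the synthetic data are illustrative) computing the sample Pearson correlation of Eq. (3.22) and Spearman's rank correlation.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(8)
x1 = rng.standard_normal(300)
x2 = 0.7 * x1 + 0.3 * rng.standard_normal(300)   # linearly related to x1 plus noise

print(pearsonr(x1, x2)[0])    # sample Pearson correlation, Eq. (3.22)
print(spearmanr(x1, x2)[0])   # Pearson correlation applied to the ranks
```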

The concept of correlation can be expanded to a univariate time series (Sect. 5.1), i.e. \((L_{1}, L_2, \ldots , L_{N})^T\) where \(L_t\) is a single value at time t and the points are ordered consecutively in time. First consider the autocovariance and the autocorrelation—i.e. the (Pearson) correlation between values in the time series and its lagged values. The sample autocovariance function at lag k is defined as

$$\begin{aligned} R(k) = \frac{1}{N} \sum _{t=1}^{N-k} (L_t-\hat{\mu })(L_{t+k} - \hat{\mu }), \end{aligned}$$
(3.23)

where \(\hat{\mu }\) is the sample mean of the time series \((L_{1}, L_2, \ldots , L_{N})^T\). Similarly the autocorrelation function (ACF) at lag k is defined as

$$\begin{aligned} \rho (k) = \frac{1}{N\hat{\sigma }^2} \sum _{t=1}^{N-k} (L_t - \hat{\mu })(L_{t+k} - \hat{\mu }) = \frac{R(k)}{R(0)}= \frac{R(k)}{\hat{\sigma }^2}, \end{aligned}$$
(3.24)

where \(\hat{\sigma }^2\) is the sample variance.

A plot of the autocorrelation at several consecutive lags \(\rho (0), \rho (1), \ldots \) is a common way to identify dependencies of a time series on its lagged values, as well as to identify important features such as seasonalities. Such plots are commonly used to identify the correct orders for ARIMA models (see Sect. 9.4). The autocorrelation is bounded, \(-1 \le \rho (k) \le 1\) for \(k \in \mathbb {N}\), with \(\rho (k)=1\) indicating a perfect positive correlation and \(\rho (k)=-1\) a perfect negative correlation. No correlation whatsoever is indicated by \(\rho (k)=0\).
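A minimal sketch (assuming NumPy; the period-6 toy series is illustrative) of the sample autocovariance and autocorrelation of Eqs. (3.23) and (3.24).

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(200)
L = np.sin(2 * np.pi * t / 6) + 0.3 * rng.standard_normal(len(t))   # period-6 series

def autocovariance(L, k):
    N, mu = len(L), np.mean(L)
    return np.sum((L[: N - k] - mu) * (L[k:] - mu)) / N   # Eq. (3.23)

def autocorrelation(L, k):
    return autocovariance(L, k) / autocovariance(L, 0)    # Eq. (3.24)

print([round(autocorrelation(L, k), 2) for k in range(13)])
# Peaks at lags 6 and 12 reflect the period-6 seasonality.
```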

Fig. 3.9 Example of autocorrelation (top) and partial autocorrelation (bottom) plots for the same time series

The partial autocorrelation function (PACF) evaluated at lag k describes the autocorrelation between the series \(L_t\) and the lagged series \(L_{t+k}\), conditional on the in-between values \(L_{t+1}, \ldots , L_{t+k-1}\). In other words, it describes the autocorrelation after removing the effects of the shorter lags.

An example of an autocorrelation and partial autocorrelation plot for 30 lags is shown in Fig. 3.9. Typically such plots also include two horizontal lines which indicate the level at which significant correlations exist. Values outside of the area defined by these lines indicate significant correlation values (although of course sometimes values can be outside these lines by chance alone).

The ACF plot shows strong autocorrelations at lags 6, 12, 18 and 24, but also at other lags in between. The PACF plot shows the same strong lags at multiples of 6; however, many of the other correlations are much smaller in the PACF plot than in the ACF plot (for example at lags 3, 4 and 5), which indicates that the influence of the other lags may have suggested a stronger autocorrelation than truly existed in the time series.

The cross-correlation is an extension of the autocorrelation to two different time series. To illustrate this, consider a second time series \(M_1, M_2, \ldots , M_N\), defined at the same time steps as \((L_{1}, L_2, \ldots , L_{N})^T\). The cross-correlation between L and M at lag \(k\ge 0\) can be defined as

$$\begin{aligned} \text {XR}(k) = \frac{1}{N} \sum _{t=1}^{N-k} (L_t - \hat{\mu }_1)(M_{t+k} - \hat{\mu }_2), \end{aligned}$$
(3.25)

and for \(k \le 0\) as

$$\begin{aligned} \text {XR}(k) = \frac{1}{N} \sum _{t=1}^{N-|k|} (M_t - \hat{\mu }_2)(L_{t+|k|} - \hat{\mu }_1), \end{aligned}$$
(3.26)

where \(\hat{\mu }_1\) and \(\hat{\mu }_2\) are the sample means of the time series \(L_t\) and \(M_t\) respectively. The value at \(k=0\) represents the correlation between the two series without any lag. The function not only identifies which lags are the most important but also the temporal direction of the relationship. For example, temperature can be a strong driver of demand; however, if the nearest weather station is in the next town over then its recorded values may be related to the temperature in the town of interest but with a delay. In that case there will be a strong cross-correlation between the demand and the lagged temperature observed in the next town.
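A hedged sketch (assuming NumPy) of Eq. (3.25) for a toy pair of series in which M is a delayed copy of L; the largest value appears at the lag corresponding to the delay.

```python
import numpy as np

rng = np.random.default_rng(10)
L = np.sin(2 * np.pi * np.arange(300) / 24) + 0.2 * rng.standard_normal(300)
M = np.roll(L, 3)                                   # M is L delayed by three steps

def xr(L, M, k):
    """Sample cross-correlation XR(k) of Eq. (3.25) for k >= 0."""
    N = len(L)
    mu_L, mu_M = np.mean(L), np.mean(M)
    return np.sum((L[: N - k] - mu_L) * (M[k:] - mu_M)) / N

values = [xr(L, M, k) for k in range(10)]
print(int(np.argmax(values)))                       # expected to be 3, the imposed delay
```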


3.6 Questions

  1. Generate samples of different sizes from a normal distribution with a mean and standard deviation of your choice. Plot the sample mean against the sample size. Do the values start to converge to the true values? At roughly how many samples does this convergence begin? This demonstrates the law of large numbers. How much does the mean change for small samples? Now plot the median. How much does this change for small samples? Does one seem more robust to additions of new data than the other?

  2. For the standard normal distribution, how much of the data lies within (a) one standard deviation, \(\sigma \), (b) two standard deviations, (c) three standard deviations of the centre?

  3. For a standard normal distribution, which quantiles most closely approximate the values one standard deviation from the centre?

  4. If a distribution is not symmetric, what does the difference between the mean and median tell you about the skewness of the distribution?

  5. Sample from a univariate distribution of your choice and plot the histogram (using the default bin size). Change the bin sizes and number of samples and compare them to the original distribution. What appears to be a good bin size for one of your sampling sets (assuming the sample size is big enough)?

  6. Sample from the same univariate distribution as in the previous question. Plot a kernel density estimate with different bandwidths and different sample sizes. Observe the effects: what appears to be a good choice of bandwidth for one of your sample sets (make sure the number of samples is big enough)?

  7. Plot the box plots for three data sets, each generated from 1000 samples of a different univariate distribution (perhaps use the gamma, lognormal and Gaussian distributions described in this chapter). Adjust the parameters of each model so the plots can all be seen clearly on the figure. Compare these plots to the original distributions: How does the central value vary across the box plots? What about the whiskers and the box sizes?

  8. Plot a bivariate Gaussian distribution with unit variance for both variables. Change the correlation between the variables and see how the plots change. How does the marginal distribution of each variable change as you change the correlation?

  9. For the same bivariate distribution as in the previous question, consider the conditional distribution of one of the variables by binning samples of the data contained within a small interval of the other variable. Plot the histogram of these points. How does this change as you change the position of the interval?

  10. Download some energy time series data (see Sect. D.4). Plot the autocorrelation and partial autocorrelation for a few of these time series (only consider lags up to a couple of weeks). Where are the major correlation values? What is the difference between the autocorrelation plots and the partial autocorrelation plots? Compare these values to plots of the time series. Are there obvious seasonal patterns, and how do they relate to the autocorrelation plots?

  11. Download the GEFCOM 2014 data, or alternatively any other demand time series data which also includes temperature data (see Sect. D.4 for a list of data sets). Plot one of the demand series against the temperature data in a scatter plot. What does the relationship look like? Is there a clear effect of temperature on this data? Calculate the cross-correlation between the demand and temperature data. Where are the major correlations? Are there strong correlations at lagged values?