1 Introduction

Regime-switching models have attracted considerable attention in recent years. Their flexible specification, which allows for abrupt changes in model dynamics, has made them popular not only in econometrics (Choi 2009; Hamilton 1996b; Lux and Morales-Arias 2010), but also in fields as diverse as traffic modeling (Cetin and Comert 2006), population dynamics (Luo and Mao 2007), river flow analysis (Vasas et al. 2007) and earthquake counts (Bulla and Berzel 2008). This paper is motivated by yet another stream of literature: electricity spot price models in energy economics (Bierbrauer et al. 2007; De Jong 2006; Erlwein et al. 2010; Huisman and de Jong 2003; Janczura and Weron 2010, 2012; Karakatsani and Bunn 2008, 2010; Mari 2008; Misiorek et al. 2006; Weron 2009). Regime-switching models have seen extensive use in this area due to their relative parsimony (a prerequisite in derivatives pricing) and their ability to capture the unique characteristics of electricity prices, in particular the spiky and non-linear price behavior. While the existence of distinct regimes in electricity prices is generally unquestionable (being a consequence of the non-linear, heterogeneous supply stack structure in power markets; see, e.g., Eydeland and Wolyniec 2012; Weron 2006), the actual goodness-of-fit of the models requires statistical validation.

However, recent work on the statistical fit of regime-switching models has been devoted mainly to testing parameter stability against the regime-switching hypothesis. Several tests have been constructed for the verification of the number of regimes. Most of them exploit the likelihood ratio technique (Cho and White 2007; Garcia 1998), but there are also approaches based on recurrence times (Sen and Hsieh 2009), likelihood criteria (Celeux and Durand 2008) or the information matrix (Hu and Shin 2008). Specification tests to detect autocorrelation and ARCH effects were proposed by Hamilton (1996a), based on the score function technique, and more recently by Smith (2008), utilizing the Rosenblatt transformation (see also Sect. 3.3). Smith found that the performance of the Ljung–Box test improved when it was applied to the normally distributed Rosenblatt transformation. However, the Markov regime-switching models considered there were relatively simple and had two states differing only in the mean. Interestingly, the Rosenblatt transformation was used earlier for evaluating density forecasts of regime-switching (Berkowitz 2001; Diebold et al. 1998; Haas et al. 2004) and stochastic volatility models (Kim et al. 1998), typically in a risk management context.

On the other hand, to the best of our knowledge, procedures for goodness-of-fit testing of the marginal distribution of regime-switching models have not been derived to date (with the exception of Janczura and Weron 2009, where an ewedf-type test was introduced in the context of electricity spot price models; see Sect. 3.2.1 for details). With this paper we fill this gap and propose empirical distribution function (edf) based testing procedures built on the Kolmogorov–Smirnov test that are dedicated to regime-switching models with observable as well as latent state processes. In contrast to the approaches based on the Rosenblatt transformation, the techniques proposed in this paper allow for testing the fit in the individual regimes as well as of the whole model. This can be advantageous in many situations, as we additionally obtain information on which regimes are correctly and which are incorrectly specified. The derivation of the tests is not straightforward and, in the case of a latent state process, requires an application of the concept of the weighted empirical distribution function (wedf). Finally, we should also note that the term “marginal distribution” does not mean that the proposed tests ignore the dynamic regime structure. On the contrary, the dynamic structure is taken into account when constructing the residuals used in the testing procedures.

The paper is structured as follows: in Sect. 2 we describe the structure of the analyzed regime-switching models and briefly explain the estimation process (for details we refer to an article recently published in AStA; Janczura and Weron 2012). In Sect. 3, we introduce goodness-of-fit testing procedures appropriate for regime-switching models both with observable and latent state processes. Next, in Sect. 4 we provide a simulation study and check the performance of the proposed techniques. Since the motivation for this paper comes from the energy economics literature, in Sect. 5 we show how the presented testing procedure can be applied to verify the fit of Markov regime-switching models to electricity spot prices. Finally, in Sect. 6 we conclude and outline future work.

2 Regime-switching models

2.1 Model definition

Assume that the observed process \(X_t\) may be in one of \(L\) states (regimes) at time \(t\), dependent on the state process \(R_t\):

$$\begin{aligned} X_t=\left\{ \begin{array}{ccc} X_{t,1}&\text{ if}&R_t=1,\\ \vdots&\vdots&\vdots \\ X_{t,L}&\text{ if}&R_t=L. \end{array}\right. \end{aligned}$$
(1)

Possible specifications of the process \(R_t\) may be divided into two classes: those where the current state of the process is observable (like threshold models, e.g., TAR, SETAR) and those where it is latent. Probably, the most prominent representatives of the second group are the hidden Markov models (HMM; for a review see, e.g., Cappe et al. 2005) and their generalizations allowing for temporal dependence within the regimes—the Markov regime-switching models (MRS). Like in HMMs, in MRS models \(R_t\) is assumed to be a Markov chain governed by the transition matrix \(\mathbf P \) containing the probabilities \(p_{ij}\) of switching from regime \(i\) at time \(t\) to regime \(j\) at time \(t+1\), for \(i,j=\{1,2,\ldots ,L\}\):

$$\begin{aligned} \mathbf P =(p_{ij})= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} p_{11}&p_{12}&\ldots&p_{1L} \\ p_{21}&p_{22}&\ldots&p_{2L} \\ \vdots&\vdots&\ddots&\vdots \\ p_{L1}&p_{L2}&\ldots&p_{LL} \\ \end{array} \right), ~~ \text{ with} ~~ p_{ii}=1-\sum \limits _{j\ne i} p_{ij}. \end{aligned}$$
(2)

The current state \(R_t\) at time \(t\) depends on the past only through the most recent value \(R_{t-1}\). The probability of being in regime \(j\) at time \(t+m\) starting from regime \(i\) at time \(t\) is given by

$$\begin{aligned} P(R_{t+m} = j\mid R_t = i) = \left[(\mathbf P ^{\prime })^m e_i\right]_j, \end{aligned}$$
(3)

where \(\mathbf P ^{\prime }\) denotes the transpose of \(\mathbf P \), \(e_i\) is the \(i\)th column of the identity matrix and \([\cdot ]_j\) denotes the \(j\)th element of a vector. In general, models with an arbitrary number \(L\) of regimes can be considered. However, for clarity of exposition we limit the discussion in this paper to two-regime models only. Note that this is not a very restrictive limitation—at least in the context of modeling electricity spot prices—since typically two or three regimes are enough to adequately model the dynamics (Janczura and Weron 2010; Karakatsani and Bunn 2010). Nonetheless, all presented results are also valid for \(L>2\).
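As an illustration, the \(m\)-step probabilities in (3) reduce to a matrix power. The following sketch (Python/NumPy; the matrix \(P\) below is an arbitrary example, not taken from this paper) raises the transposed transition matrix to the \(m\)th power:

```python
import numpy as np

# Example two-state transition matrix, rows summing to one:
# p_ij = P(R_{t+1} = j | R_t = i).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def m_step_probs(P, i, m):
    """Probabilities of each regime at time t+m, starting from regime i at t.

    Implements (P')^m e_i from Eq. (3); the j-th entry of the result
    is P(R_{t+m} = j | R_t = i).
    """
    e_i = np.eye(P.shape[0])[:, i]              # i-th column of the identity matrix
    return np.linalg.matrix_power(P.T, m) @ e_i

probs = m_step_probs(P, i=0, m=3)
```

For \(m=1\) the result is simply the \(i\)th row of \(\mathbf P \).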

The definitions of the individual regimes can be chosen arbitrarily, depending on the modeling needs. Again for the sake of clarity, in this paper we focus on two specifications of MRS models commonly used in the energy economics literature (Ethier and Mount 1998; De Jong 2006; Hirsch 2009; Huisman and de Jong 2003; Janczura and Weron 2010; Mari 2008). The first one (denoted by I or type I) assumes that the process \(X_t\) is driven by two independent regimes: (1) a mean-reverting AR(1) process:

$$\begin{aligned} X_{t,1}=\alpha + (1-\beta )X_{t-1,1} +\sigma \epsilon _t \end{aligned}$$
(4)

with \(0<\beta <1\) and \(\sigma >0\), where the residuals \(\epsilon _t\)s are independent, \(F^1\)-distributed (in the following we assume that \(F^1\) is the standard Gaussian cdf) and (2) an i.i.d. sample from a specified continuous, strictly monotone distribution \(F^2\):

$$\begin{aligned} X_{t,2}\sim F^2(x). \end{aligned}$$
(5)

Observe that in such a model specification the values of the first regime \(X_{t,1}\) become latent when the process is in the second state, and they do not depend on the realization (trajectory) of the second regime. Such a specification, though computationally challenging, is useful for modeling processes with radically different dynamics in the individual regimes. For instance, in wholesale power markets price spikes and price drops (i.e., negative price spikes) are typically driven by unexpected changes in market conditions, such as a generation outage or a severe heat wave (in the case of price spikes) or favorable wind conditions combined with low consumption (in the case of price drops). Price spikes occur due to the lack of storage capabilities and the limited flexibility to respond to sudden changes in supply and/or demand for electricity. Once the cause of the spike has passed, prices move back to the normal (or base) level, usually irrespective of the magnitude of the extreme prices a few hours or days earlier (Eydeland and Wolyniec 2012; Huisman 2009; Weron 2006). For an example of such behavior see Fig. 1 in Sect. 5.

Fig. 1

Mean daily (baseload) day-ahead spot prices from the New England Power Pool SEMASS area (NEPOOL; US) from the 5-year period January 2, 2006–January 2, 2011

In the second specification (denoted by II or type II) \(X_t\) is described by an AR(1) process having different parameters in each regime, namely:

$$\begin{aligned} X_{t}=\alpha _{R_t}+ (1-\beta _{R_t})X_{t-1} +\sigma _{R_t} \epsilon _t, \quad R_t\in \{1,2\}, \end{aligned}$$
(6)

where the residuals \(\epsilon _t\)s are independent, \(N(0,1)\)-distributed random variables. Again, we assume that \(0<\beta _i<1\) and \(\sigma _i>0\).

2.2 Estimation

Estimation of regime-switching models with an observable state process boils down to independently estimating the parameters in each regime. In the case of MRS models, though, the estimation process is not straightforward, since the state process is latent and not directly observable: the parameters and the state process values have to be inferred at the same time. In this paper, we use a variant of the Expectation–Maximization (EM) algorithm that was first applied to MRS models by Hamilton (1990) and later refined by Kim (1994). It is a two-step iterative procedure, reaching a local maximum of the likelihood function:

  • Step 1 Denote the observation vector by \(\mathbf x_T =(x_1,x_2,\ldots ,x_T)\). For a parameter vector \(\theta ^{(n)}\) compute the conditional probabilities \(P(R_t = i|\mathbf x_T ;\theta ^{(n)})\)—the so called ‘smoothed inferences’—for the process being in regime \(i\) at time \(t\).

  • Step 2 Calculate new maximum likelihood estimates \(\theta ^{(n+1)}\) by maximizing the log-likelihood function weighted with the smoothed inferences from Step 1, i.e.,

    $$\begin{aligned} \log \left[L(\theta ^{(n+1)})\right]=\sum _{i=1}^2\sum _{t=1}^T P(R_t = i|\mathbf x_T ;\theta ^{(n)})\log \left[f_i(x_t|\mathbf x_{t-1} ;\theta ^{(n+1)})\right], \end{aligned}$$

    where \(f_i(x_t|\mathbf x_{t-1} ;\theta ^{(n+1)})\) is the conditional density of the \(i\)th regime, and update the transition probabilities:

    $$\begin{aligned} p^{(n+1)}_{ij}= \frac{\sum _{t=2}^T P(R_t =j, R_{t-1}=i|\mathbf x _{T};\theta ^{(n)})}{\sum _{t=2}^{T}P(R_{t-1}=i|\mathbf x _{T};\theta ^{(n)})}. \end{aligned}$$

For a detailed description of the estimation procedure see the original paper of Kim (1994) or a recent article of Janczura and Weron (2012), where an efficient algorithm for MRS models of type I is presented.
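The transition probability update in Step 2 is a simple ratio of smoothed sums. A minimal sketch, assuming the smoothed joint probabilities \(P(R_t =j, R_{t-1}=i|\mathbf x _{T};\theta ^{(n)})\) are already available from the E-step (below they are replaced by random placeholders, only to exercise the formula):

```python
import numpy as np

# Hypothetical smoothed joint inferences for a 2-regime model:
# joint[t, i, j] stands in for P(R_t = j, R_{t-1} = i | x_T; theta^(n)),
# t = 2, ..., T. In practice these come from the Kim (1994) smoother;
# here random placeholders are used purely for illustration.
rng = np.random.default_rng(0)
joint = rng.dirichlet(np.ones(4), size=200).reshape(200, 2, 2)

def update_transition_matrix(joint):
    """M-step update of p_ij: ratio of smoothed sums, as in Step 2."""
    num = joint.sum(axis=0)          # sum_t P(R_t = j, R_{t-1} = i | x_T)
    den = joint.sum(axis=(0, 2))     # sum_t P(R_{t-1} = i | x_T)
    return num / den[:, None]

P_new = update_transition_matrix(joint)
```

Each row of the updated matrix sums to one by construction, so \(p^{(n+1)}_{ii}=1-\sum _{j\ne i} p^{(n+1)}_{ij}\) holds automatically.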

3 Goodness-of-fit testing

In this section, we introduce a goodness-of-fit testing technique that can be applied to evaluate the fit of regime-switching models. It is based on the Kolmogorov–Smirnov (K-S) goodness-of-fit test and verifies the null hypothesis \(H_0\) that the observations come from the distribution implied by the model specification. The procedure can be easily adapted to other empirical distribution function (edf) type tests, like the Anderson–Darling test.

3.1 Testing in case of an observable state process

 

3.1.1 Specification I

In this case the hypothesis \(H_0\) states that the sample \((x_1,x_2,\ldots ,x_T)\) is generated from a regime-switching model with two independent regimes defined as: an AR(1) process (first regime) and i.i.d. \(F^2\)-distributed random variables (second regime). Provided that the values of the state process \(R_t\) are known, the observations can be split into separate subsamples related to each of the regimes. Namely, subsample \(i\) consists of all values \(X_t\) satisfying \(R_t=i\). The regimes are independent of each other, but the i.i.d. condition must also be satisfied within the subsamples themselves. Therefore, the mean-reverting regime observations are substituted by their respective residuals. More precisely, the following transformation is applied to each pair of consecutive AR(1) observations in regime \(R_t=1\):

$$\begin{aligned} h(x,y,k)=\frac{x-(1-\beta )^ky-\alpha \frac{1-(1-\beta )^k}{\beta }}{\sigma \sqrt{\frac{1-(1-\beta )^{2k}}{1-(1-\beta )^2}}}, \end{aligned}$$
(7)

where \((k-1)\) is the number of latent observations from the mean-reverting regime (or equivalently the number of observations from the second regime that occurred between two consecutive AR(1) observations) and \(\alpha \), \(\beta \) and \(\sigma \) are the model parameters, see (4). It is straightforward to see that if \(H_0\) is true, transformation \(h(x_{t+k,1},x_{t,1},k)\) applied to consecutive observations from the mean-reverting AR(1) regime leads to a sample \((y_1^1,y_2^1,\ldots ,y_{n_1}^1)\) of independent and conditionally \(N(0,1)\)-distributed random variables. Note that from now on we use the following notation. The original observed sample is denoted by \((x_1,x_2,\ldots ,x_T)\). The i.i.d. (or conditionally i.i.d. in Sect. 3.2) samples in each of the regimes are denoted by \((y_1^1,y_2^1,\ldots ,y_{n_1}^1)\) and \((y_1^2,y_2^2,\ldots ,y_{n_2}^2)\), with \(n_1+n_2=T-1\). Note that for the mean-reverting regime these samples are obtained by applying transformation (7).

Further, observe that transformation \(h(X_{t+k,1},X_{t,1},k)\) is based on subtracting the conditional mean from \(X_{t+k,1}\) and standardizing it with the conditional variance. Indeed, \((1-\beta )^k X_{t,1} + \alpha \frac{1-(1-\beta )^k}{\beta }\) is the conditional expected value of \(X_{t+k,1}\) given \((X_{1,1},X_{2,1},\ldots ,X_{t,1})\) and \(\sigma ^2\frac{1-(1-\beta )^{2k}}{1-(1-\beta )^2}\) is the respective conditional variance.

Transformation (7) ensures that the subsample containing observations from the mean-reverting regime is i.i.d. Since the second regime is i.i.d. by definition, standard goodness-of-fit tests based on the empirical distribution function (like the Kolmogorov–Smirnov or Anderson–Darling tests, see e.g., D’Agostino and Stevens 1986) can be applied to each of the subsamples. Recall that the Kolmogorov–Smirnov test statistic is given by:

$$\begin{aligned} D_n=\sqrt{n}\sup _{x\in \mathbb R }|F_n(x)-F(x)|, \end{aligned}$$
(8)

where \(n\) is the sample size, \(F_n\) is the empirical distribution function (edf) and \(F\) is the corresponding theoretical cumulative distribution function (cdf). Hence, having an i.i.d. sample \((y_1,y_2,\ldots ,y_n)\), the test statistic can be calculated as

$$\begin{aligned} d_n=\sqrt{n}\max _{1\le t\le n}\left|\sum _{k=1}^{n}\frac{1}{n}\mathbb I _{\{y_k\le y_t\}}-F(y_t)\right|, \end{aligned}$$
(9)

where \(\mathbb{I }\) is the indicator function.

The goodness-of-fit of the marginal distribution of the individual regimes can be formally tested. For the mean-reverting regime, \(F\) is the standard Gaussian cdf and \((y_1,y_2,\ldots ,y_{n_1})\) is the subsample of the standardized residuals obtained by applying transformation (7), while for the second regime, \(F\) is the model-specified cdf (i.e., \(F^2\)) and \((y_1,y_2,\ldots ,y_{n_2})\) is the subsample of respective observations. Observe that the ‘whole model’ goodness-of-fit can also be verified, using the fact that for \(X\sim F^2\) the variable \(Y=(F^1)^{-1}[ F^2 (X) ]\) is \(F^1\)-distributed. Indeed, the sample \((y_1^{1},y_2^{1},\ldots ,y_{n_1}^{1},y_1^{2},y_2^{2},\ldots ,y_{n_2}^{2})\), where the \(y_t^{1}\)s are the standardized residuals of the mean-reverting regime and the \(y_t^{2}\)s are the transformed variables corresponding to the second regime, i.e., \(y_t^{2}=(F^1)^{-1}[ F^2 (x_{t,2}) ]\), is i.i.d. \(N(0,1)\)-distributed and, hence, the testing procedure is applicable.
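A minimal implementation of transformation (7) and of the test statistic (9) may look as follows (Python/NumPy sketch; evaluating the edf on both sides of each observation is a standard numerical detail not spelled out in (9)):

```python
import numpy as np
from math import erf, sqrt

def std_norm_cdf(x):
    """Standard Gaussian cdf, vectorized via math.erf."""
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in np.atleast_1d(x)])

def h(x, y, k, alpha, beta, sigma):
    """Transformation (7): standardized residual of the AR(1) regime.

    x is the current observation, y the previous observation from the same
    regime, and k-1 is the number of latent observations in between.
    """
    b = 1.0 - beta
    mean = b**k * y + alpha * (1.0 - b**k) / beta
    var = sigma**2 * (1.0 - b**(2 * k)) / (1.0 - b**2)
    return (x - mean) / np.sqrt(var)

def ks_statistic(y, cdf):
    """Test statistic d_n of Eq. (9): sqrt(n) times the sup-distance
    between the empirical distribution function and the theoretical cdf."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    edf = np.arange(1, n + 1) / n            # edf evaluated at each sorted y_t
    theo = np.asarray(cdf(y))
    # compare the cdf with the edf just before and just at each observation
    d = np.maximum(np.abs(edf - theo), np.abs(edf - 1.0 / n - theo)).max()
    return sqrt(n) * d
```

For the mean-reverting regime one would pass the residuals obtained from `h` together with `cdf=std_norm_cdf`; for the second regime, the observations together with the cdf \(F^2\).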

3.1.2 Specification II

The \(H_0\) hypothesis now states that the sample \((x_1,x_2,\ldots ,x_T)\) is driven by a regime-switching model defined by Eq. (6) with \(R_t\in \{1,2\}\). Similarly, as in the independent regimes case, the testing procedure is based on extracting the residuals of the mean-reverting process. Indeed, observe that under the \(H_0\) hypothesis the transformation \(h(x_t,x_{t-1},1)\), defined in (7), with parameters \(\alpha _{R_t}, \beta _{R_t}\) and \(\sigma _{R_t} \) corresponding to the current value of the state process \(R_t\), yields an i.i.d. \(N(0,1)\) distributed sample. Thus, the Kolmogorov–Smirnov test can be applied. The test statistic \(d_n\), see (9), is calculated with the standard Gaussian cdf and the sample \((y_1,y_2,\ldots ,y_T)\) of the standardized residuals, i.e., \(y_t=h(x_t,x_{t-1},1)\).

3.1.3 Critical values

Note that the testing procedure described above is valid only if the parameters of the hypothesized distribution are known. Unfortunately, in typical applications the parameters have to be estimated beforehand. If this is the case, then the critical values for the test must be reduced (Čižek et al. 2011). In other words, if the value of the test statistic \(d_n\) is \(d\), then the \(p\) value is overestimated by \(P(d_n \ge d)\). Hence, if this probability is small, then the \(p\) value will be even smaller and the hypothesis will be rejected. However, if it is large, then we have to obtain a more accurate estimate of the \(p\) value.

To cope with this problem, Ross (2002) recommends using Monte Carlo simulations. In our case the procedure reduces to the following steps. First, the parameter vector \(\hat{\theta }\) is estimated from the dataset and the test statistic \(d_n\) is calculated according to formula (9). Next, \(\hat{\theta }\) is used as the parameter vector for \(N\) samples simulated from the assumed model. For each sample a new parameter vector \(\hat{\theta }_i\) is estimated and a new test statistic \(d_n^i\) is calculated using formula (9). Finally, the \(p\) value is obtained as the proportion of simulated samples with test statistic values higher than or equal to \(d_n\), i.e., \(p\text{ value}=\frac{1}{N}\#\{i: d_n^i\ge d_n\}\).
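The estimate–simulate–re-estimate loop can be sketched as follows. For brevity the ‘model’ below is a plain i.i.d. Gaussian sample with estimated mean and standard deviation rather than a full regime-switching fit; the structure of the Monte Carlo procedure is the same in the MRS case:

```python
import numpy as np
from math import erf, sqrt

def ks_stat_normal(y):
    """sqrt(n) * sup-distance between the edf of y and the N(0,1) cdf."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    edf = np.arange(1, n + 1) / n
    theo = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in y])
    return sqrt(n) * np.maximum(np.abs(edf - theo),
                                np.abs(edf - 1.0 / n - theo)).max()

def mc_pvalue(x, n_sim=200, seed=0):
    """Monte Carlo p value with re-estimated parameters (Sect. 3.1.3 sketch)."""
    rng = np.random.default_rng(seed)
    mu, s = x.mean(), x.std(ddof=1)           # estimate theta-hat from the data
    d_obs = ks_stat_normal((x - mu) / s)      # observed test statistic
    exceed = 0
    for _ in range(n_sim):                    # simulate N samples from theta-hat
        sim = rng.normal(mu, s, size=len(x))
        m, sd = sim.mean(), sim.std(ddof=1)   # re-estimate on each simulated sample
        if ks_stat_normal((sim - m) / sd) >= d_obs:
            exceed += 1
    return exceed / n_sim                     # proportion of exceedances

x = np.random.default_rng(42).normal(1.0, 2.0, size=300)
p = mc_pvalue(x)
```

Re-estimating the parameters on every simulated sample is the essential step: it reproduces the reduction of the critical values caused by the preliminary estimation.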

3.2 Testing in case of a latent state process

3.2.1 The ewedf approach

Now, assume that the sample \((x_1,x_2,\ldots ,x_T)\) is driven by an MRS model. The regimes are not directly observable and, hence, the standard edf approach can be used only if an identification of the state process is performed first. Recall that, as a result of the estimation procedure described in Sect. 2.2, the so-called ‘smoothed inferences’ about the state process are derived. The smoothed inferences are the probabilities \(P(R_t=i|x_1,x_2,\ldots ,x_T)\) that the \(t\)th observation comes from a certain regime given all available information. Hence, a natural choice is to relate each observation with the most probable regime by letting \(R_t=i\) if \(P(R_t=i|x_1,x_2,\ldots ,x_T)>0.5\). Then, the testing procedure described in Sect. 3.1 is applicable. However, we have to mention that the hypothesis \(H_0\) now states that \((x_1,x_2,\ldots ,x_T)\) is driven by a regime-switching model with known state process values. We call this approach ‘ewedf’, which stands for ‘equally-weighted empirical distribution function’. It was introduced by Janczura and Weron (2009) in the context of electricity spot price MRS models.

3.2.2 The weighted empirical distribution function (wedf)

In the standard goodness-of-fit testing approach based on the edf each observation is taken into account with weight \(\frac{1}{n}\) (i.e., inversely proportional to the size of the sample). However, in MRS models the state process is latent. The estimation procedure (the EM algorithm) only yields the probabilities that a certain observation comes from a given regime. Moreover, in the resulting marginal distribution of the MRS model each observation is, in fact, weighted with the corresponding probability. Therefore, a similar approach should be used in the testing procedure.

For this reason, we introduce here the concept of the weighted empirical distribution function (wedf):

$$\begin{aligned} F_n(x)=\sum \limits _{t=1}^n\frac{w_t\mathbb{I }_{\{y_t<x\}}}{\sum _{t=1}^nw_t}, \end{aligned}$$
(10)

where \((y_1,y_2,\ldots ,y_n)\) is a sample of observations and \((w_1,\ldots ,w_n)\) are the corresponding weights, such that \(0 \le w_t\le M\), \(\forall _{t=1,\ldots ,n}\). It is interesting to note that the notion of the weighted empirical distribution function appears in the literature in different contexts. Maiboroda (1996, 2000) applied it to the problem of estimation and testing for homogeneity of components of mixtures with varying coefficients. Withers and Nadarajah (2010) investigated properties of distributions of smooth functionals of \(F_n(x)\). In both approaches the weights were assumed to fulfill the condition \(\sum _{t=1}^nw_t=n\). A different choice of weights was used by Huang and Brill (2004), who proposed the level-crossing method to find weights improving the efficiency of the edf in the distribution tails. Yet another approach employing the weighted distribution is the generalized (weighted) bootstrap technique, see e.g., Haeusler et al. (1991), where specified random weights are used to improve the resampling method.

However, to the best of our knowledge, none of the applications of the wedf is related to goodness-of-fit testing of Markov regime-switching models. Here, we use the wedf concept to deal with the case when observations cannot be unambiguously classified to one of the regimes. Hence, a natural choice of weights for the \(i\)th regime observations is \(w_t=P(R_t=i|x_1,x_2,\ldots ,x_T)=E(\mathbb{I }_{\{R_t=i\}}|x_1,x_2,\ldots ,x_T)\).
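A direct implementation of (10) is straightforward (Python/NumPy sketch; the strict inequality \(y_t<x\) of (10) is preserved):

```python
import numpy as np

def wedf(y, w, x):
    """Weighted empirical distribution function, Eq. (10).

    Observation y_t enters with weight w_t, e.g. the smoothed probability
    that y_t belongs to the regime under study; x is a point (or array of
    points) at which the wedf is evaluated.
    """
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.array([(w * (y < xi)).sum() for xi in np.atleast_1d(x)]) / w.sum()
```

With equal weights the wedf reduces to the ordinary edf, which is exactly the ewedf special case of Sect. 3.2.1.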

3.2.3 The wedf approach for specification II

First, let us focus on the parameter-switching specification. The \(H_0\) hypothesis states that the sample \(\mathbf x _T=(x_1,x_2,\ldots ,x_T)\) is driven by the MRS model defined by equation (6). Assume that \(H_0\) is true and the model parameters are known. As in the observable state process case, the test cannot be applied directly to the observed sample. Let \(y_t^i\) be the transformed variables corresponding to the \(i\)th regime, i.e., \(y_t^i=[x_{t+1}-\alpha _i-(1-\beta _i)x_t]/\sigma _i\). Observe that if \(R_t=i\), then \(y_t^i\) becomes the residual of the \(i\)th regime and, hence, has the standard Gaussian distribution. The weighted empirical distribution function (wedf) is then given by:

$$\begin{aligned} F_n(x)=\frac{1}{n}\sum \limits _{t=1}^n\left[P(R_t=1|\mathbf x _T)\mathbb{I }_{\{y_t^1<x\}}+P(R_t=2|\mathbf x _T)\mathbb{I }_{\{y_t^2<x\}}\right], \end{aligned}$$
(11)

where \(n\) is the size of the sample (here \(n=T-1\)). Let \(\mathfrak R \) be the \(\sigma \)-algebra generated by the state process \(\{R_t\}_{t=1,2,\ldots ,T}\), i.e., the state process history up to time \(T\). Observe that the elements of the sum in (11) are conditionally independent given \(\mathfrak R \). Indeed, if for a given \(t\), \(R_t=i\) then the \(t\)th component of the sum becomes \(\mathbb{I }_{\{y_t^i<x\}}\) and \(y_t^i\)’s given \(R_t=i\) form an i.i.d. \(N(0,1)\)-distributed sample. Moreover, the following lemma ensures that the true cdf of the residuals can be approximated by the wedf.

Lemma 1

If \(H_0\) is true, then \(F_n\) given by (11) is an unbiased, consistent estimator of the distribution of the residuals (in this case Gaussian).

Note that proofs of all lemmas and theorems formulated in this section can be found in the Appendix.

The following theorem yields a version of the K-S test applicable to the parameter-switching MRS model (6). Note that if the state process were observable, it would boil down to the standard K-S test (Lehmann and Romano 2005, p. 584).

Theorem 1

Let \(F_n\) be given by (11) and \(F\) be the standard Gaussian cdf. If \(H_0\) is true and the model parameters are known, then the statistic

$$\begin{aligned} D_n=\sqrt{n}\sup \limits _{x\in \mathbb{R }}|F_n(x)-F(x)| \end{aligned}$$
(12)

converges (weakly) to the Kolmogorov–Smirnov distribution \(KS\) as \(n\rightarrow \infty \).

If hypothesis \(H_0\) is true then, by Theorem 1, the statistic \(D_n\) asymptotically has the Kolmogorov–Smirnov distribution. Therefore, if \(n\) is large enough, the following approximation holds

$$\begin{aligned} P(D_n\ge c|H_0)\approx P(\kappa \ge c), \end{aligned}$$
(13)

where \(\kappa \sim KS\) and \(c\) is the critical value. (Recall that \(y_t^i=[x_{t+1}-\alpha _i-(1-\beta _i)x_t]/\sigma _i\).) Hence, the \(p\) value for the sample \((y_1^1,y_2^1,\ldots ,y_n^1,y_1^2,y_2^2,\ldots ,y_n^2)\) can be approximated by \(P(\kappa \ge d_n)\), where

$$\begin{aligned} d_n=\sqrt{n}\max _{1\le t \le n }\max _{i=1,2}\left|F_n(y_t^i) -F(y_t^i)\right| \end{aligned}$$
(14)

is the test statistic. Note that, for a given value of \(d_n\), \(P(\kappa >d_n)\) is the standard Kolmogorov–Smirnov test \(p\) value, so that the K-S test tables can be easily applied in the wedf approach.

The above procedure is applicable to testing the distribution of the residuals of the (whole) model. A similar approach can be used for testing the distributions of the residuals of the individual regimes. Let the wedf for the \(i\)th regime be defined as:

$$\begin{aligned} F^i_n(x)=\sum \limits _{t=1}^n\frac{P(R_t=i|\mathbf x _T)\mathbb{I }_{\{y_t^i<x\}}}{\sum _{t=1}^n P(R_t=i|\mathbf x _T)}, \end{aligned}$$
(15)

where again \(y_t^i\)s are the transformed variables corresponding to the \(i\)th regime, i.e., \(y_t^i=[x_{t+1}-\alpha _i-(1-\beta _i)x_t]/\sigma _i\). Further, denote the theoretical distribution of the \(i\)th regime residuals (here Gaussian) by \(F^i\).

Lemma 2

If \(H_0\) is true, then \(F^i_n(x)\) given by (15) is an unbiased estimator of \(F^i(x)\). Moreover, it is consistent if \(\forall _{i,j=1,2}\) \(p_{ij}<1\).

An analogue of Theorem 1 can be derived.

Theorem 2

Let \(F_n\) be given by (15) and assume that \(R_t\) is an ergodic Markov chain. If \(H_0\) is true and the model parameters are known, then the statistic

$$\begin{aligned} D^i_n=\sqrt{w_n}\sup \limits _{x\in \mathbb{R }}|F^i_n(x)-F^i(x)|, \end{aligned}$$
(16)

where

$$\begin{aligned} w_n=\sum _{\{i_1,i_2,\ldots ,i_n\}\in I}\mathbb I _{\{R_1=i_1,R_2=i_2,\ldots ,R_n=i_n\}} \frac{\left[\sum _{\{k: i_k=i\}} \mathbb{I }_{\{R_k=i\}}\right]^2}{\sum _{\{k: i_k=i\}}\mathbb{I }^2_{\{R_k=i\}}} \end{aligned}$$
(17)

and \(I=\{(i_1,i_2,\ldots ,i_n):i_k\in \{1,2\}, k=1,2,\ldots ,n\}\) converges (weakly) to the Kolmogorov–Smirnov distribution \(KS\) as \(n\rightarrow \infty \).

Observe that \(\sqrt{w_n}\) can be approximated by \(\frac{\sum _{t=1}^{n} P(R_t=i|\mathbf x _T)}{\sqrt{\sum _{t=1}^{n} P^2(R_t=i|\mathbf x _T)}}\). Hence, for a sample of \((y_1^i,y_2^i,\ldots ,y_n^i)\) the test statistic is given by

$$\begin{aligned} d^i_n=\frac{\sum _{t=1}^{n} P(R_t=i|\mathbf x _T)}{\sqrt{\sum _{t=1}^{n} P^2(R_t=i|\mathbf x _T)}}\max _{1\le t \le n }\left|F^i_n(y_t^i) -F^i(y_t^i)\right| \end{aligned}$$
(18)

and the standard testing procedure can be applied.
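For a single regime, the wedf (15) and the statistic (18) can be computed in a few lines (a sketch assuming a standard Gaussian \(F^i\); evaluating the sup-distance on both sides of each jump of the wedf is a numerical detail added here):

```python
import numpy as np
from math import erf, sqrt

def wedf_regime_statistic(y, p):
    """Test statistic d_n^i of Eq. (18) for one regime.

    y are the transformed variables y_t^i, p the smoothed probabilities
    P(R_t = i | x_T); the theoretical cdf F^i is taken standard Gaussian.
    """
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    order = np.argsort(y)
    y, p = y[order], p[order]
    total = p.sum()
    F_right = np.cumsum(p) / total           # wedf just at/after each y_t
    F_left = F_right - p / total             # wedf with the strict '<' of (15)
    F_theo = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in y])
    d = max(np.abs(F_right - F_theo).max(), np.abs(F_left - F_theo).max())
    scale = total / sqrt((p**2).sum())       # approximation of sqrt(w_n)
    return scale * d
```

With all weights equal to one the scale factor becomes \(\sqrt{n}\) and the statistic reduces to the classical K-S statistic of Sect. 3.1.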

3.2.4 The wedf approach for specification I

Now, assume that the sample \((x_1,x_2,\ldots ,x_T)\) is driven by the MRS model with independent regimes. The results of Theorems 1 and 2 can be applied; however, slight modifications of the tested sample(s) are required. First, observe that the values of the mean-reverting regime become latent when the process is in the second state. As a consequence, the calculation of the conditional mean and variance, required for the derivation of the residuals, is not straightforward. We have:

$$\begin{aligned} E(X_{t,1}|\mathbf{{x}}_{t-1})&= \alpha +(1-\beta )E(X_{t-1,1}|\mathbf{{x}}_{t-1}),\\ {\text{ Var}}(X_{t,1}|\mathbf{{x}}_{t-1})&= (1-\beta )^2 {\text{ Var}}(X_{t-1,1}|\mathbf{{x}}_{t-1}) +\sigma ^2, \end{aligned}$$

where \(\mathbf{x}_{t-1}=(x_1,x_2,\ldots ,x_{t-1})\) is the vector of preceding observations. Therefore, the standardized residuals are given by the transformation:

$$\begin{aligned} g(X_{t,1},\mathbf{{x}}_{t-1}) = \frac{X_{t,1}-\alpha - (1-\beta )E(X_{t-1,1}|\mathbf{{x}}_{t-1})}{\sqrt{(1-\beta )^2 {\text{ Var}}(X_{t-1,1}|\mathbf{{x}}_{t-1})+\sigma ^2}}, \end{aligned}$$
(19)

where \(E(X_{t-1,1}|\mathbf{{x}}_{t-1})\) and \({\text{ Var}}(X_{t-1,1}|\mathbf{{x}}_{t-1})\) can be calculated using the following equalities:

$$\begin{aligned} E(X_{t,1}|\mathbf{{x}}_t)&= P(R_t=1|\mathbf{{x}}_t)x_t \nonumber \\&+P(R_t\ne 1|\mathbf{{x}}_t)\left[\alpha +(1-\beta )E(x_{t-1,1}|\mathbf{{x}}_{t-1})\right]\!,\end{aligned}$$
(20)
$$\begin{aligned} E(X_{t,1}^2|\mathbf{{x}}_t)&= P(R_t=1|\mathbf{{x}}_t)x_t^2 \nonumber \\&+P(R_t\ne 1|\mathbf{{x}}_t)\big [\alpha ^2 + 2\alpha (1-\beta )E(X_{t-1,1}|\mathbf{{x}}_{t-1})\nonumber \\&+(1-\beta )^2E \left(X_{t-1,1}^2|\mathbf{{x}}_{t-1}\right)+\sigma ^2\big ]. \end{aligned}$$
(21)

The latter formula is a consequence of the law of iterated expectation and basic properties of conditional expected values. Finally, the values \(P(R_t=1|\mathbf{{x}}_t)\) are calculated from the Bayes rule during the EM estimation procedure (see e.g., Kim 1994). Note that the transformed variables \((y_1^1,y_2^1,\ldots ,y_{T-1}^1)\), where \(y_t^1= g(x_{t,1},\mathbf{{x}}_{t-1})\), are \(\mathfrak R \)-independent and \(N(0,1)\)-distributed conditionally on \(\mathfrak R \).

Now, to test the fit of the mean-reverting regime, it is enough to calculate \(d^i_n\) according to formula (18) with the standard Gaussian cdf and \(y_t^1= g(x_{t},\mathbf{{x}}_{t-1})\). Observe that the observations from the second regime are i.i.d. by definition, so the testing procedure is straightforward with the \(F^2\) cdf and the sample \((x_1,x_2,\ldots ,x_{T})\). Moreover, the ‘whole model’ goodness-of-fit can also be verified. Theorem 1 is directly applicable if the distributions of the samples corresponding to both regimes are the same, \(F=F^1=F^2\). Observe that, even if \(F^1\ne F^2\), the test can still be applied using the fact that for \(X\sim F^2\) the variable \(Y=(F^1)^{-1}[ F^2 (X) ]\) is \(F^1\)-distributed. The test statistic \(d_n\) is calculated as in (14) with the \(F^1\) cdf (here Gaussian) and the sample \((y_1^{1},y_2^{1},\ldots ,y_{T-1}^{1},y_1^{2},y_2^{2},\ldots ,y_{T}^{2})\), where \((y_1^{1},y_2^{1},\ldots ,y_{T-1}^{1})\) are the transformed variables of the mean-reverting regime, i.e., \(y_t^{1}= g(x_{t,1},\mathbf{{x}}_{t-1})\), while \((y_1^{2},y_2^{2},\ldots ,y_{T}^{2})\) are the variables corresponding to the second regime, i.e., \(y_t^{2}=(F^1)^{-1}[ F^2 (x_t) ]\).
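The recursions (20)–(21) translate into a simple loop. In the sketch below the probabilities \(P(R_t=1|\mathbf{x}_t)\) are assumed to be supplied by the EM routine, and the recursion is initialized with the first observation (an assumption of this illustration, not prescribed above):

```python
import numpy as np

def latent_ar1_moments(x, prob1, alpha, beta, sigma):
    """Recursions (20)-(21): conditional mean and variance of the latent
    AR(1) regime X_{t,1} given past observations.

    prob1[t] stands for P(R_t = 1 | x_1, ..., x_t), assumed to be available
    from the EM estimation step (Bayes rule, see Kim 1994).
    """
    m = x[0]                     # initialize E(X_{1,1} | x_1) -- an assumption
    m2 = x[0] ** 2               # initialize E(X_{1,1}^2 | x_1)
    means, variances = [m], [m2 - m**2]
    b = 1.0 - beta
    for t in range(1, len(x)):
        prior_m = alpha + b * m                                   # latent branch of (20)
        prior_m2 = alpha**2 + 2 * alpha * b * m + b**2 * m2 + sigma**2  # latent branch of (21)
        m = prob1[t] * x[t] + (1.0 - prob1[t]) * prior_m          # Eq. (20)
        m2 = prob1[t] * x[t]**2 + (1.0 - prob1[t]) * prior_m2     # Eq. (21)
        means.append(m)
        variances.append(m2 - m**2)
    return np.array(means), np.array(variances)
```

When every \(P(R_t=1|\mathbf{x}_t)\) equals one, the first regime is always observed, the conditional mean collapses to the data itself and the conditional variance vanishes, as expected.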

Note that, as in the case of an observable state process, in the wedf approach we face the problem of estimating parameters that are later used to compute the test statistic. Again, this problem can be circumvented with the help of Monte Carlo simulations. The \(p\) values can be computed as the proportion of simulated MRS model trajectories with the test statistic \(d_n\), see formulas (14) and (18), higher than or equal to the value of \(d_n\) obtained from the dataset.

3.3 The Rosenblatt transformation

The new tests proposed in this section are compared with an approach utilizing the Rosenblatt (1952) transformation. The latter is based on the fact that if a sample \((x_1,x_2,\ldots ,x_T)\) is driven by a multivariate distribution \(F\), then the transformed variables \(y_t=F_t(x_t|\mathbf{{x}}_{t-1})\), where \(F_t\) is the corresponding conditional cdf, are independent and uniformly distributed. For the MRS models considered in this paper the transformation is given by

$$\begin{aligned} y_t=\sum _{i=1}^2 P(R_t = i|\mathbf{{x}}_{t-1})F_t^i(x_t|\mathbf{{x}}_{t-1}), \end{aligned}$$
(22)

where \(F_t^i\) is the conditional distribution of regime \(i\). For specification I, \(F_t^1\) is the normal cdf with mean and variance given by (), while \(F_t^2\) is the cdf of the second regime, as defined by the model, see (5). For specification II, \(F_t^i\) is the normal cdf with mean \(\alpha _i+(1-\beta _i)x_{t-1}\) and variance \(\sigma ^2_i\). Since \((y_1,y_2,\ldots ,y_T)\) form an i.i.d. sample under \(H_0\), the standard edf-type tests—like the Kolmogorov–Smirnov—can be applied. In order to test for the same distribution in all testing approaches, we apply one more transformation. Namely, following Berkowitz (2001), Haas et al. (2004) and Smith (2008), we calculate \(\Phi ^{-1}(y_t)\), with \(\Phi \) being the standard normal cdf, and obtain an independent \(N(0,1)\)-distributed sample.
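A minimal sketch of transformation (22) for specification II, followed by the \(\Phi^{-1}\) step. The function name and parameter layout are our own, and the degenerate one-regime example at the end is only a sanity check, not an analysis from the text.

```python
import numpy as np
from scipy import stats

def rosenblatt_spec2(x, probs, alpha, beta, sigma):
    """Rosenblatt transform (22) for a 2-regime parameter-switching AR(1):
    probs[t, i] approximates P(R_t = i | x_0, ..., x_{t-1}); returns
    Phi^{-1}(y_t), an i.i.d. N(0,1) sample under H0."""
    x = np.asarray(x, dtype=float)
    y = np.empty(len(x) - 1)
    for t in range(1, len(x)):
        u = 0.0
        for i in range(2):
            mean = alpha[i] + (1 - beta[i]) * x[t - 1]
            u += probs[t, i] * stats.norm.cdf(x[t], loc=mean, scale=sigma[i])
        y[t - 1] = u
    return stats.norm.ppf(np.clip(y, 1e-12, 1 - 1e-12))

# Degenerate sanity check: a single AR(1) regime (probs put all mass on it)
rng = np.random.default_rng(1)
T = 3000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 1.0 + 0.5 * x[t - 1] + rng.standard_normal()
probs = np.tile([1.0, 0.0], (T, 1))
z = rosenblatt_spec2(x, probs, alpha=(1.0, 0.0), beta=(0.5, 1.0), sigma=(1.0, 1.0))
```

In a real application the `probs` array would come from the filtering step of the MRS estimation, not be fixed in advance.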

The Rosenblatt transformation is a very useful and general tool; however, it can only be used to test the goodness-of-fit of the whole model. In contrast, the ewedf and wedf approaches allow for testing the fit of the individual regimes. This can be advantageous in many situations, as we additionally obtain information on which regimes are correctly specified and which are not. Moreover, the ewedf and wedf approaches yield estimators of the regime and model cdfs, providing a readily available tool for further testing and model building. In the case of the Rosenblatt transformation, on the other hand, an empirical distribution function can only be constructed for the transformed variables, which makes it hard to interpret.

4 Simulations

We now check the performance of the testing procedures introduced in Sect. 3. Due to space limitations, we focus on the more challenging case of a latent state process and consider four 2-regime MRS models defined in Table 1. The parameters of models Sim #1 and Sim #2 are chosen arbitrarily, while those of models Sim #3 and Sim #4 are estimated from the NEPOOL log-prices studied in Sect. 5. Furthermore, models Sim #1 and Sim #3 follow specification I, i.e., the first regime is driven by an AR(1) process, while the second regime is described by an i.i.d. sample of log-normally distributed random variables with parameters \(\alpha _2\) and \(\sigma _2^2\), i.e., \(LN(\alpha _2,\sigma _2^2)\). Recall that a random variable \(X>0\) is log-normally distributed, \(X \sim LN(\alpha _2,\sigma _2^2)\), if \(\log (X) \sim \text{ N}(\alpha _2,\sigma _2^2)\). In order to apply the tests to the ‘whole model’ (and not only to the individual regimes) we transform the second regime values \(\{x_i\}\), obtaining an \(N(0,1)\)-distributed sample: \(\{[\log (x_i)-\alpha _2]/\sigma _2; i=1,\ldots ,T\}\). Models Sim #2 and Sim #4, on the other hand, are simulated from the parameter-switching AR(1) model, i.e., follow specification II, see formula (6). Finally, note that since the regimes of the considered models are not directly observable, the standard edf-based goodness-of-fit tests cannot be used.

Table 1 Parameters of four 2-regime MRS models analyzed in the simulation study of Sect. 4

4.1 Known model parameters

We generate 10,000 trajectories of each of the four 2-regime MRS models defined in Table 1. Each trajectory is 2,000 observations long, which corresponds to roughly 5.5 years of daily data (note that markets for electricity operate 365 days per year). We apply the ewedf, wedf and Rosenblatt transformation-based goodness-of-fit tests to each simulated trajectory and then calculate the percentage of rejected hypotheses \(H_0\) at the 5 % significance level. We assume that the model parameters are known. The computation of \(E(X_{t,1}|\mathbf{{x}}_t)\) in the wedf approach requires backward recursion until the previous observation from the mean-reverting regime is found, see (21). However, as the number of observations is limited, the condition \(P(R_t=1|\mathbf{{x}}_t)=1\) might never be fulfilled, so the estimation scheme requires an approximation or an additional assumption. Here, we assume that for each simulated trajectory the first observation comes from the mean-reverting regime.

In the ewedf approach the tested hypothesis states that the state process is known (and coincides with the proposed classification of the observations to the regimes). As a consequence, once the regimes are identified, the approach is equivalent to the standard edf approach. To check how it performs for an MRS model with a latent state process, we apply it to the simulated trajectories: we first identify the regimes, then test whether the sample is generated from the assumed MRS model.

The results reported in Table 2 indicate that only the wedf and Rosenblatt transformation-based tests yield correct percentages of rejected hypotheses. The values obtained for the ewedf-based test are far from the expected 5 % level. The ewedf approach is more restrictive, probably due to its less flexible classification of the regimes, in which the probabilities \(P(R_t=i|\mathbf{{x}}_t)\) can only take the values 0 or 1. This simple example clearly shows that in the case of MRS models the ewedf approach is less reliable.

Table 2 Percentage of rejected hypotheses \(H_0\) at the \(5\,\%\) significance level calculated from 10,000 simulated trajectories of 2000 observations each of the four 2-regime MRS models defined in Table 1

Finally, to measure the quality of regime classification, we use the regime classification measure (RCM) of Ang and Bekaert (2002), see the last column in Table 2. Since the true regime is a Bernoulli random variable, the RCM statistic is essentially a sample estimate of its variance. The RCM statistic is rescaled so that a value of 0 means perfect regime classification and a value of 100 implies that no information about the regimes is revealed. In our case, the regime classification is very good or good for specification I models used for modeling electricity prices in Sect. 5 (Sim #1 and Sim #3) and good or moderately good for specification II models (Sim #2 and Sim #4). The lower RCM values for type I models also imply that in these models the regimes are better separated than in the respective type II models. Finally, in models whose parameters are estimated from the NEPOOL log-prices (Sim #3 and Sim #4) the regimes are less separated than in the other two models.
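For a 2-regime model the RCM of Ang and Bekaert (2002) reduces to \(400\cdot\frac{1}{T}\sum_t p_t(1-p_t)\), with \(p_t\) the smoothed probability of being in regime 1. A direct implementation might look as follows (the function name is our own):

```python
import numpy as np

def rcm(smoothed_probs):
    """RCM of Ang and Bekaert (2002) for a 2-regime model:
    400 * mean(p_t * (1 - p_t)), with p_t the smoothed probability of
    regime 1; 0 = perfect classification, 100 = no information."""
    p = np.asarray(smoothed_probs, dtype=float)
    return float(400.0 * np.mean(p * (1.0 - p)))
```

For instance, `rcm([0, 1, 0, 1])` returns 0 (perfect classification), while `rcm([0.5] * 100)` returns 100 (no information about the regimes).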

4.2 Unknown model parameters

The simulation results presented so far were obtained under the assumption that the model parameters are known. Unfortunately, in typical applications the parameters have to be estimated before the testing procedure is performed, which may result in overestimated \(p\) values. To cope with this problem, we use Monte Carlo simulations (for details see, e.g., Čižek et al. 2011; Ross 2002). For each of the 500 trajectories (of 2,000 observations each) simulated from each of the four 2-regime MRS models defined in Table 1, the procedure is as follows:

  1. Estimate the parameter vector (\(\hat{\theta }\)) and calculate the test statistic (\(d_n\)) according to formula (9).

  2. For ‘K-S table’-type (‘K-S tab.’) estimation, calculate the \(p\) value using the K-S test tables, assuming that the sample comes from a model with parameter vector \(\hat{\theta }\).

  3. For ‘MC simulation’-type (‘MC sim.’) estimation:

     (a) simulate \(N=500\) trajectories with parameter vector \(\hat{\theta }\) (these trajectories will be used to compute the estimate of the \(p\) value),

     (b) for each trajectory \(i=1,\ldots ,N\) estimate the parameter vector (\(\hat{\theta }_i\)) and calculate the test statistic (\(d_n^i\)),

     (c) calculate the \(p\) value as the proportion of simulated trajectories with test statistic values greater than or equal to \(d_n\), i.e., \(\frac{1}{N}\#\{i: d_n^i\ge d_n\}\).
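The Monte Carlo procedure above amounts to a parametric bootstrap and can be sketched as follows. This is a schematic Python version: `simulate`, `estimate` and `test_statistic` are placeholders for model-specific routines, and the toy example at the end deliberately mis-specifies an exponential sample as Gaussian merely to show a rejection.

```python
import numpy as np
from scipy import stats

def mc_p_value(data, simulate, estimate, test_statistic, N=500, seed=0):
    """Parametric-bootstrap p value: refit the model to every simulated
    trajectory, so that the null distribution of the statistic accounts
    for parameter estimation, then count statistics >= the observed one."""
    rng = np.random.default_rng(seed)
    theta_hat = estimate(data)                  # step 1: fit the model
    d_obs = test_statistic(data, theta_hat)     # ... and its statistic
    d_sim = np.empty(N)
    for i in range(N):                          # steps 3(a)-(b)
        traj = simulate(theta_hat, len(data), rng)
        d_sim[i] = test_statistic(traj, estimate(traj))
    return float(np.mean(d_sim >= d_obs))       # step 3(c)

# Toy check: an exponential sample mis-specified as N(theta, 1)
rng = np.random.default_rng(42)
data = rng.exponential(1.0, size=200)
p = mc_p_value(
    data,
    simulate=lambda th, n, r: r.normal(th, 1.0, n),
    estimate=lambda x: float(np.mean(x)),
    test_statistic=lambda x, th: stats.kstest(x, 'norm', args=(th, 1.0)).statistic,
    N=200,
)
```

The refitting in step (b) is what distinguishes this scheme from simply reading the K-S tables: the simulated statistics share the downward bias induced by estimation, so the \(p\) value is not overestimated.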

Note that, in contrast to the simulation study of Sect. 4.1, where we used 10,000 simulated trajectories to obtain the percentages given in Table 2, we now use 500 simulated trajectories for the ‘K-S table’-based percentages and \(500\times 500=250{,}000\) simulated trajectories for the ‘MC simulation’-based percentages in Table 3. Despite the substantially increased computational burden, the accuracy of the percentages of rejected hypotheses has decreased, since each percentage is now based on only 500 samples.

Table 3 Percentage of rejected hypotheses \(H_0\) at the \(5\,\%\) significance level calculated from 500 simulated trajectories of 2,000 observations each of the models defined in Table 1 with parameters estimated from each sample

Looking at the test results based on the K-S test tables (‘K-S tab.’ in Table 3), for the ewedf approach the rejection percentages deviate significantly from the \(5\,\%\) level. For the wedf and the Rosenblatt transformation-based approaches, on the other hand, the \(p\) values are overestimated, which results in rejection percentages much lower than the 5 % significance level. Observe that for most of the models no rejections were recorded at all. Therefore, if the \(p\) values obtained with the wedf or the Rosenblatt transformation-based approaches are close to the significance level, the test may fail to reject a false null hypothesis \(H_0\). This is not the case for the testing approach utilizing Monte Carlo simulations (‘MC sim.’ in Table 3), as the obtained rejection percentages are close to the 5 % significance level. This example clearly shows that the wedf and the Rosenblatt transformation-based tests using the K-S test tables can only be relied upon if they return a \(p\) value below the significance level (i.e., if they reject \(H_0\)) or well above it. If the obtained \(p\) value is close to the significance level, Monte Carlo simulations should be performed.

4.3 Power of the tests

In this section we investigate the power of the tests. To this end, for a given MRS model we simulate 500 trajectories of 100, 500 or 2,000 observations each. Next, for each trajectory we calibrate an MRS model with an alternative specification of the regimes and perform goodness-of-fit tests to verify whether the simulated trajectory could have been generated by the alternative model. Finally, we calculate the percentages of rejected hypotheses. The tests utilize all three approaches—ewedf, wedf and Rosenblatt transformation-based—and both methods of calculating \(p\) values—K-S test tables and Monte Carlo simulations. We first consider the following three cases with arbitrarily chosen parameters:

  • AR-ARG1 vs. AR-AR The trajectories are simulated from an MRS model defined as:

    $$\begin{aligned} X_{t}=\alpha _{R_t}+ (1-\beta _{R_t})X_{t-1} +\sigma _{R_t}X_{t-1}^{\gamma _{R_t}} \epsilon _t, \quad R_t\in \{1,2\}, \end{aligned}$$

    where \(\alpha _1=1\), \(\beta _1=0.8\), \(\sigma _1^2=1\), \(\gamma _1=0\), \(\alpha _2=3\), \(\beta _2=0.4\), \(\sigma _2^2=0.05\), \(\gamma _2=1\), \(p_{11}=0.6\) and \(p_{22}=0.5\). The model is denoted by AR-ARG1, which indicates that the first regime is driven by an AR(1) process and the second regime by a heteroskedastic autoregressive process with \(\gamma =1\) (i.e., ARG1). We test whether the simulated trajectories can be described by the model defined in Eq. (6), i.e., following specification II, and denoted here by AR-AR.

  • AR-E vs. AR-LN The trajectories are simulated from an MRS model following specification I, see (4) and (5), with an exponential distribution in the second regime, i.e., \(F^2 \sim \text{ Exp}(\lambda )\). The model is denoted here by AR-E and its parameters are given by: \(\alpha =10\), \(\beta =0.6\), \(\sigma ^2=10\), \(\lambda =30\), \(p_{11}=0.6\) and \(p_{22}=0.5\). We test whether the simulated trajectories can be driven by a model following specification I with a log-normal distribution in the second regime (i.e., AR-LN).

  • CIR-LN vs. AR-G The trajectories are simulated from an MRS model defined as:

    $$\begin{aligned} X_{t,1}&= \alpha _1+(1-\beta _1)X_{t-1,1} +\sigma _1 \sqrt{X_{t-1,1}}\, \epsilon _t,\\ X_{t,2}&\sim LN\left(\alpha _2,\sigma _2^2\right), \end{aligned}$$

    where \(\alpha _1=1\), \(\beta _1=0.8\), \(\sigma _1^2=0.5\), \(\alpha _2=2\), \(\sigma _2^2=0.5\), \(p_{11}=0.6\) and \(p_{22}=0.5\); the first regime is a discrete-time version of the square-root process, also known as the CIR process (Cox et al. 1985), and the second regime is a log-normal random variable—hence the name CIR-LN. We test whether the simulated trajectories can be driven by a model following specification I with a Gaussian distribution in the second regime (i.e., AR-G).

 

The test results are summarized in Table 4. The values obtained for the individual regimes are also provided; however, as the simulated and estimated models differ, these rejection rates depend heavily on how observations are classified to the regimes during estimation. Therefore, in the discussion that follows we focus on the test results for the whole models. Comparing the power of the Monte Carlo approach with that of the K-S test tables, we observe that in most cases the latter yields lower (i.e., worse) rejection percentages. This is consistent with the results obtained in Sect. 4.2. The only significant deviations from this pattern can be observed for the AR-E vs. AR-LN test scenario, in which the true model (i.e., AR-E)—and hence the simulated trajectories—exhibits a low degree of regime separation. The latter is manifested by relatively high RCM values of 38.23–41.69 (averaged over 500 simulated trajectories for each of the three sample sizes), compared to RCM values of 8.07–8.28 and 13.68–15.81 for the first and third scenarios, respectively. Note that the reported RCM values were computed for the true models fitted to the 500 trajectories simulated from these models.
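A trajectory of a 2-regime MRS model can be simulated by driving the chosen regime dynamics with a two-state Markov chain; the sketch below uses the AR-ARG1 parameters listed above. The generic `step` interface and the `abs` guard in \(X_{t-1}^{\gamma}\) are our own choices, not taken from the paper.

```python
import numpy as np

def simulate_mrs(T, p11, p22, step, x0=0.0, seed=0):
    """Simulate a 2-regime MRS series: the latent state follows a Markov
    chain with stay probabilities p11, p22, and step(regime, x_prev, rng)
    draws the next observation from the active regime's dynamics."""
    rng = np.random.default_rng(seed)
    stay = (p11, p22)
    x = np.empty(T)
    x[0], r = x0, 0
    for t in range(1, T):
        if rng.random() > stay[r]:      # switch to the other regime
            r = 1 - r
        x[t] = step(r, x[t - 1], rng)
    return x

# AR-ARG1 dynamics with the parameters of the first scenario
a, b = (1.0, 3.0), (0.8, 0.4)
s, g = (1.0, 0.05 ** 0.5), (0.0, 1.0)

def ar_arg1(r, xp, rng):
    # |x|**gamma guards against negative bases for non-integer gamma
    return a[r] + (1 - b[r]) * xp + s[r] * abs(xp) ** g[r] * rng.standard_normal()

x = simulate_mrs(2000, p11=0.6, p22=0.5, step=ar_arg1, x0=1.0)
```

Swapping `step` for another regime function (e.g., the CIR-LN dynamics) reproduces the other scenarios without changing the chain driver.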

Table 4 Percentages of rejected hypotheses \(H_0\) at the \(5\,\%\) significance level for the alternative models with parameters estimated for each of the 500 simulated trajectories of \(T=100\), \(500\) or \(2{,}000\) observations

Looking at the MC simulation results obtained for the largest samples of \(T=2{,}000\) observations, we can see that in almost all cases the false hypothesis was rejected. The lowest rejection rate for the ewedf approach was 0.7580, for the wedf approach 0.9620 and for the Rosenblatt transformation 0.9900; all three were obtained for the challenging AR-E vs. AR-LN test scenario. For smaller samples, however, the power of the tests clearly decreases. A sample size of \(T=100\) observations does not seem to be sufficient, especially if the degree of regime separation is low, as in the AR-E vs. AR-LN scenario. This is not the case if the definitions of the two regimes differ substantially, as in the CIR-LN vs. AR-G scenario, for which the power is satisfactory even for \(T=100\). Comparing the ewedf and wedf approaches, we observe that the latter yields higher rejection rates on average. A comparison of the wedf and the Rosenblatt transformation-based approach does not yield such a clear picture: for the AR-ARG1 vs. AR-AR scenario the power of the Rosenblatt transformation-based approach is higher than that of the wedf approach, for the AR-E vs. AR-LN scenario it is only slightly higher, while for the CIR-LN vs. AR-G scenario it is the wedf approach that produces higher rejection rates (for small samples).

Next, in Table 5 we consider three analogous scenarios, but this time the parameters of the simulated trajectories are estimated from the NEPOOL log-prices studied in Sect. 5. As in Table 4, the ‘K-S tab.’ method generally yields lower (i.e., worse) rejection percentages than the Monte Carlo approach. Comparing all three approaches (ewedf, wedf, Rosenblatt) we find that this time they have roughly the same ‘whole model’ rejection rates, but—except for the AR-E vs. AR-LN scenario—the power of the tests has decreased. To a large extent this can be explained by regime classification. Now the AR-E vs. AR-LN scenario exhibits much lower RCM values (0.59–0.92; again averaged over 500 simulated trajectories for each of the three sample sizes) than the first (23.76–30.91) and third (7.08–9.31) scenarios. However, despite RCM values almost twice as low for the third scenario (NEPOOL-estimated vs. arbitrary parameters), the rejection percentages for this scenario have substantially decreased. Moreover, they no longer increase with sample size, which may indicate problems with reaching the global maximum of the likelihood in the estimation procedure for small samples. This is not unexpected when conducting Monte Carlo studies of Markov regime-switching models, as the likelihood surface can be highly irregular and contain several local maxima (Smith 2008).

Table 5 Percentages of rejected hypotheses \(H_0\) at the \(5\,\%\) significance level for the alternative models with parameters estimated for each of the 500 simulated trajectories of \(T=100\), \(500\) or \(2{,}000\) observations

5 Application to electricity spot prices

Now, we are ready to apply the new goodness-of-fit technique to electricity spot price models. We analyze the mean daily (baseload) day-ahead spot prices from the New England Power Pool SEMASS area (NEPOOL; US). The sample totals 1,827 daily observations (or 261 full weeks) and covers the 5-year period January 2, 2006–January 2, 2011, see Fig. 1. It is well known that electricity spot prices exhibit several characteristic features (Benth et al. 2008; Eydeland and Wolyniec 2012; Huisman 2009; Weron 2006), which have to be taken into account when modeling such processes. These include seasonality on the annual, weekly and daily level, mean reversion and price spikes. To cope with the seasonality we use the standard time series decomposition approach and let the electricity spot price \(P_t\) be represented by a sum of two independent parts: a predictable (seasonal) component \(f_t\) and a stochastic component \(X_t\), i.e., \(P_t = f_t + X_t\). Further, to address the mean-reverting and spiky behavior we let the log-prices, i.e., \(Y_t=\log (X_t)\), be driven by:

  • a 2-regime MRS model with mean-reverting, see (4), base regime (\(R_t=1\)) and i.i.d. shifted log-normally distributed spikes (\(R_t=2\)) or

  • a 3-regime MRS model with mean-reverting, see (4), base regime (\(R_t=1\)), i.i.d. shifted log-normally distributed spikes (\(R_t=2\)) and i.i.d. drops (\(R_t=3\)) distributed according to the inverted shifted log-normal law.

Note that we use here the independent regime specification (i.e., type I), since in wholesale power markets the price spikes (‘Second regime’) and price drops (i.e., negative price spikes; ‘Third regime’) are driven by unexpected but transient changes of market conditions. A generation outage or a severe heat wave typically does not last longer than a few hours or a few days; once it is over, the prices move back to the normal level (‘Base regime’), usually irrespective of the magnitude of the extreme prices a few hours or days earlier (De Jong 2006; Janczura et al. 2012; Mari 2008).

Furthermore, recall that \(X\) follows the shifted log-normal law or the inverted shifted log-normal law if \(\log (X-q)\), respectively \(\log (q-X)\), has a Gaussian distribution. The cutoff level \(q\) can be different for the spike and drop regimes; here—motivated by the results of Janczura and Weron (2013)—we set it to the first quartile of the dataset for drops and the third quartile for spikes. Using shifted log-normal distributions increases the degree of regime separation as measured by the RCM and, hence, the power of the tests (see Sect. 4). For instance, the 2-regime model now yields an RCM of 4.79, compared to 12.81 for Sim #3 in Table 2. It also leads to more fundamentally justified models, since electricity price spikes are generally connected with scheduling units with higher marginal costs (like gas turbines; see e.g., Eydeland and Wolyniec 2012; Weron 2006).
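Transforming shifted (or inverted shifted) log-normal observations to a standard normal sample, as needed for the ‘whole model’ tests, is a one-line mapping. The sketch below uses illustrative, not fitted, parameter values:

```python
import numpy as np

def to_gaussian(x, q, mu, sigma, inverted=False):
    """Map shifted log-normal observations (log(x - q) ~ N(mu, sigma^2)),
    or inverted shifted log-normal ones (log(q - x) ~ N(mu, sigma^2)),
    to a standard normal sample."""
    x = np.asarray(x, dtype=float)
    z = np.log(q - x) if inverted else np.log(x - q)
    return (z - mu) / sigma

rng = np.random.default_rng(0)
q, mu, sigma = 2.0, 0.5, 0.3                    # illustrative values
spikes = q + np.exp(mu + sigma * rng.standard_normal(4000))
drops = q - np.exp(mu + sigma * rng.standard_normal(4000))
z1 = to_gaussian(spikes, q, mu, sigma)          # ~ N(0, 1)
z2 = to_gaussian(drops, q, mu, sigma, inverted=True)
```

With both regimes mapped to \(N(0,1)\), the spike and drop samples can be pooled with the (standardized) base-regime residuals for the whole-model test.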

Finally, note that such simple one-factor models may not be complex enough to capture all features of electricity prices. In particular, the electricity forward prices implied by these spot price models exhibit the so-called Samuelson effect (i.e., a decrease in volatility with increasing time to maturity; for the considered models the volatility scales as \(e^{-\beta (T-t)}\)), but the rate of decrease is completely determined by the speed of mean reversion \(\beta \) (Janczura and Weron 2012), whereas it should be large only for maturities up to a year (Kiesel et al. 2009). Incorporating another stochastic factor would perhaps lead to a more realistic forward price curve.

Following Weron (2009), the deseasonalization is conducted in three steps; for a thorough study of modeling seasonal components in electricity spot prices we refer to Janczura et al. (2012). First, the long-term seasonal component (LTSC) \(T_t\) is estimated from the daily spot prices \(P_t\) using a wavelet filter-smoother of order 6. A single non-parametric LTSC is used here to represent the long-term non-periodic fuel price levels, the changing climate/consumption conditions throughout the years and strategic bidding practices. As shown by Janczura and Weron (2010), the wavelet-estimated LTSC reflects the ‘average’ fuel price level quite well, understood as a combination of natural gas, crude oil and coal prices; see also Eydeland and Wolyniec (2012) and Karakatsani and Bunn (2010) for a treatment of the fundamental and behavioral drivers of electricity prices. On the other hand, as discussed recently in Janczura and Weron (2012), the use of the wavelet-based LTSC is somewhat controversial: predicting it beyond the next few weeks is a difficult task, because individual wavelet functions are quite localized in time or (more generally) in space. Preliminary research suggests, however, that despite this feature the wavelet-based LTSC can be extrapolated into the future, yielding a better on-average prediction of the level of future spot prices than an extrapolation of a sinusoidal LTSC (Nowotarski et al. 2011).

The price series without the LTSC is obtained by subtracting the \(T_t\) approximation from \(P_t\). Next, the weekly periodicity \(s_t\) is removed by subtracting the ‘average week’, calculated as the mean of prices corresponding to each day of the week (US national holidays are treated as the 8th day of the week). Finally, the deseasonalized prices, i.e., \(X_t = P_t - T_t - s_t\), are shifted so that the minimum of the new process \(X_t\) is the same as the minimum of \(P_t\). The resulting deseasonalized time series can be seen in Figs. 2–3. The estimated model parameters are presented in Table 6.
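The three deseasonalization steps can be sketched as follows. This is a toy Python version: the centered moving average stands in for the wavelet filter-smoother actually used, and holidays are ignored.

```python
import numpy as np

def deseasonalize(prices, weekdays, ltsc):
    """Three-step deseasonalization: (1) subtract the long-term seasonal
    component, (2) subtract the 'average week', (3) shift so that the
    minimum of the result equals the minimum of the original prices."""
    p = np.asarray(prices, dtype=float)
    w = np.asarray(weekdays)
    x = p - np.asarray(ltsc)                        # step 1: remove LTSC
    week = {d: x[w == d].mean() for d in np.unique(w)}
    x = x - np.array([week[d] for d in w])          # step 2: average week
    return x + (p.min() - x.min())                  # step 3: align minima

# Toy series: annual sine + weekly pattern + noise; a centered moving
# average is used here only as a placeholder for the wavelet smoother
rng = np.random.default_rng(0)
T = 7 * 52
wd = np.arange(T) % 7
p = (40 + 5 * np.sin(2 * np.pi * np.arange(T) / 365)
     + np.array([3, 2, 1, 1, 2, 0, -2])[wd] + rng.normal(0, 1, T))
ltsc = np.convolve(p, np.ones(29) / 29, mode='same')
x = deseasonalize(p, wd, ltsc)
```

The final shift matters for the subsequent log-transform, since it keeps the deseasonalized series on the same (positive) level as the original prices.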

Fig. 2

Estimation results for the 2-regime MRS model with a mean-reverting base regime and independent log-normally distributed ‘spikes’ fitted to NEPOOL log-prices. Observations with \(P(R_t=2|\mathbf x _T)>0.5\), i.e., the ‘spikes’, are denoted by dots. The lower panel displays the probability \(P(R_t=2|\mathbf x _T)\) of being in the ‘spike’ regime

Fig. 3

Estimation results for the 3-regime MRS model with a mean-reverting base regime and independent log-normally distributed ‘spikes’ and ‘drops’ fitted to NEPOOL log-prices. Observations with \(P(R_t=2|\mathbf x _T)>0.5\) or \(P(R_t=3|\mathbf x _T)>0.5\), i.e., the ‘spikes’ or ‘drops’, are denoted by dots or ’x’ in the upper panel. The lower panels display the probabilities \(P(R_t=2|\mathbf x _T)\) and \(P(R_t=3|\mathbf x _T)\) of being in the ‘spike’ or ‘drop’ regime, respectively

Table 6 Parameters of the 2- and 3-regime MRS models with mean-reverting base regime and independent spikes and drops driven by shifted log-normal laws fitted to the deseasonalized NEPOOL log-prices

For both analyzed MRS models, tests based on the ewedf, the wedf and the Rosenblatt transformation are performed. The \(p\) values in Table 7 are reported both for the standard approach utilizing the K-S test tables (which generally leads to overestimated \(p\) values) and the much slower but more accurate Monte Carlo setup. The testing procedure in the Monte Carlo case is analogous to the one used in the simulation study, see Sect. 4.2 for a detailed description. Again, in order to verify the ‘whole model’ goodness-of-fit, we transform the spike and drop regime observations so that both samples are \(N(0,1)\)-distributed.

For the 2-regime model the \(p\) values obtained from the K-S test tables indicate that the model cannot be rejected at the 5 % significance level. However, the base regime and whole-model \(p\) values are still quite low—especially for the wedf approach—so the conclusions of the test should be verified with Monte Carlo simulations. Indeed, for the Monte Carlo-based test only the spike regime yields a satisfactory fit, with a \(p\) value well above the 5 % significance level. The base regime, as well as the whole model distribution, can be rejected at any reasonable level for the wedf approach and at the 5 % level for the Rosenblatt transformation. Apparently, the base regime process cannot model the sudden drops in the NEPOOL log-prices. However, if a third regime (modeling price drops) is introduced, the MRS model yields a satisfactory fit: in the 3-regime case none of the null hypotheses can be rejected at the 5 % significance level. Regime classification for the 2- and 3-regime models can be observed in Figs. 2 and 3, respectively.

Table 7 The \(p\) values of the K-S test based on the ewedf, the wedf and the Rosenblatt transformation (Rtrans) for the 2- and 3-regime MRS models of the deseasonalized NEPOOL log-prices

6 Conclusions

While most of the electricity spot price models proposed in the literature are elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. As the empirical study of Sect. 5 has shown, even reasonable-looking and popular models should be carefully tested before they are put to use in trading or risk management departments. The wedf-based goodness-of-fit test introduced in Sect. 3.2.2 provides an efficient tool for accepting or rejecting a given Markov regime-switching (MRS) model for a particular data set. While its performance (including power; see Sect. 4.3) is similar to that of the Rosenblatt transformation-based approach, it has an edge over the latter in that it yields \(p\) values for the individual regimes. For instance, this allows one to observe that in the 3-regime model the worst fit is obtained for the base regime. Perhaps the simple AR(1) structure is not enough to model the complex dynamics of electricity spot prices in the relatively calm, non-spiky periods.

However, in this paper we have not restricted ourselves to MRS models but pursued a more general goal. Namely, we have proposed a goodness-of-fit testing scheme for the marginal distribution of regime-switching models, including variants with an observable and with a latent state process. For both specifications we have described the testing procedure. The models with a latent state process (i.e., MRS models) required the introduction of the concept of the weighted empirical distribution function (wedf) and a generalization of the Kolmogorov–Smirnov test to yield an efficient testing tool.

We have focused on two specifications of regime-switching models commonly used in the energy economics literature—one with dependent autoregressive states and one with independent autoregressive and i.i.d. regimes. Nonetheless, the proposed approach can easily be applied to other specifications of regime-switching models (for instance, to 3-regime models with heteroskedastic base regime dynamics; see Janczura and Weron 2010). Very likely it can also be extended to other edf-type goodness-of-fit tests, like the Anderson–Darling test. As the latter puts more weight on the observations in the tails of the distribution than the Kolmogorov–Smirnov test does, it might be more discriminatory and provide a better testing tool for extremely spiky data. Future work will be conducted in this direction.

Finally, note that a good in-sample fit does not necessarily imply good forecasting performance. Although Kosater and Mosler (2006) found, for German electricity price data, that for long-run point forecasts (30–80 days ahead) an MRS model with regimes driven by two AR(1) processes was slightly more accurate than a simple AR(1) model, for shorter time horizons both model classes performed alike. It remains an open question how the MRS models fitted to the NEPOOL log-prices in Sect. 5 perform in terms of forecasting. The adequacy of MRS models for forecasting in general has been questioned by Bessec and Bouabdallah (2005). However, as Weron and Misiorek (2008) have shown, regime-switching models may behave better than their linear competitors in volatile periods. They might also have an edge in density forecasts, but this remains to be verified.