Potential predictability and forecast skill in ensemble climate forecast: a skill-persistence rule


Abstract

This study investigates the relationship between the forecast skill for the real world (actual skill) and that for the perfect model (perfect skill) in ensemble climate model forecasts, using a series of fully coupled general circulation model forecast experiments. It is found that the actual skill for sea surface temperature (SST) in seasonal forecasts is substantially higher than the perfect skill over a large part of the tropical oceans, especially the tropical Indian Ocean and the central-eastern Pacific Ocean. The higher actual skill is found to be related to the higher observational SST persistence, suggesting a skill-persistence rule: a higher SST persistence in the real world than in the model can overwhelm the model bias to produce a higher forecast skill for the real world than for the perfect model. The relation between forecast skill and persistence is further demonstrated with a first-order autoregressive (AR1) model, analytically for theoretical solutions and numerically for analogue experiments. The AR1 model study shows that the skill-persistence rule is strictly valid in the case of infinite ensemble size, but can be distorted by sampling errors and non-AR1 processes. This study suggests that the so-called "perfect skill" is model dependent and cannot serve as an accurate estimate of the true upper limit of real-world prediction skill, unless the model captures at least the persistence property of the observations.


References

  • Anderson JL (2001) An ensemble adjustment Kalman filter for data assimilation. Mon Weather Rev 129:2884–2903

  • Anderson JL (2003) A local least squares framework for ensemble filtering. Mon Weather Rev 131:634–642

  • Becker EJ, Dool HVD, Peña M (2013) Short-term climate extremes: prediction skill and predictability. J Clim 26:512–531

  • Becker E, Dool HVD, Zhang Q (2014) Predictability and forecast skill in NMME. J Clim 27:5891–5906

  • Boer G, Kharin VV, Merryfield WJ (2013) Decadal predictability and forecast skill. Clim Dyn 41:1817–1833

  • Chen M, Wang W, Kumar A (2010) Prediction of monthly-mean temperature: the roles of atmospheric and land initial conditions and sea surface temperature. J Clim 23:717–725

  • Dunstone NJ, Smith DM (2010) Impact of atmosphere and sub-surface ocean data on decadal climate prediction. Geophys Res Lett 37:L02709. https://doi.org/10.1029/2009GL041609

  • Gaspari G, Cohn SE (1999) Construction of correlation functions in two and three dimensions. Q J R Meteorol Soc 125:723–757

  • Griffies S, Bryan K (1997) A predictability study of simulated North Atlantic multidecadal variability. Clim Dyn 13:459–487

  • Hasselmann K (1976) Stochastic climate models. Part I: theory. Tellus 28:473–485

  • Holland MM, Blanchard-Wrigglesworth E, Kay J, Vavrus S (2013) Initial-value predictability of Antarctic sea ice in the Community Climate System Model 3. Geophys Res Lett 40:2121–2124. https://doi.org/10.1002/grl.50410

  • Jacob R (1997) Low frequency variability in a simulated atmosphere ocean system. Ph.D. dissertation, University of Wisconsin–Madison, p 155

  • Kumar A (2009) Finite samples and uncertainty estimates for skill measures for seasonal prediction. Mon Weather Rev 137:2622–2631

  • Kumar A, Peng P, Chen M (2014) Is there a relationship between potential and actual skill? Mon Weather Rev 142:2220–2227

  • Liu Z, Kutzbach J, Wu L (2000) Modeling climate shift of El Niño variability in the Holocene. Geophys Res Lett 27:2265–2268

  • Liu Z, Otto-Bliesner B, Kutzbach J, Li L, Shields C (2003) Coupled climate simulations of the evolution of global monsoons in the Holocene. J Clim 16:2472–2490

  • Liu Z, Liu Y, Wu L, Jacob R (2007) Seasonal and long-term atmospheric responses to reemerging North Pacific Ocean variability: a combined dynamical and statistical assessment. J Clim 20:955–980

  • Liu Y, Liu Z, Zhang S, Rong X, Jacob R, Wu S, Lu F (2014) Ensemble-based parameter estimation in a coupled GCM using the adaptive spatial average method. J Clim 27:4002–4014

  • Lu F, Liu Z, Liu Y, Zhang S, Jacob R (2016) Understanding the control of extratropical atmospheric variability on ENSO using a coupled data assimilation approach. Clim Dyn. https://doi.org/10.1007/s00382-016-3256-7

  • Mehta VM, Suarez MJ, Manganello JV, Delworth TL (2000) Oceanic influence on the North Atlantic Oscillation and associated Northern Hemisphere climate variations: 1959–1993. Geophys Res Lett 27:121–124

  • Meinshausen M, Smith S et al (2011) The RCP GHG concentrations and their extension from 1765 to 2300. Clim Change. https://doi.org/10.1007/s10584-011-0156-z

  • Newman M, Compo GP, Alexander MA (2003) ENSO-forced variability of the Pacific decadal oscillation. J Clim 16:3853–3857

  • Pegion K, Sardeshmukh PD (2011) Prospects for improving subseasonal predictions. Mon Weather Rev 139(11):3648–3666

  • Penland C, Magorian T (1993) Prediction of Nino-3 sea surface temperature using linear inverse modeling. J Clim 6:1067–1076

  • Pohlmann H, Kröger J, Greatbatch RJ, Müller WA (2016) Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific. Clim Dyn 49:2685–2693

  • Rayner NA, Parker DE, Horton EB, Folland CK, Alexander LV, Rowell DP, Kent EC, Kaplan A (2003) Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J Geophys Res 108(D14):1063–1082. https://doi.org/10.1029/2002JD002670

  • Sévellec F, Fedorov AV (2013) Model bias reduction and the limits of oceanic decadal predictability: importance of the deep ocean. J Clim 26:3688–3707

  • Taylor KE, Stouffer RJ, Meehl GA (2012) An overview of CMIP5 and the experiment design. Bull Am Meteorol Soc 93:485–498

  • Teng H, Branstator G, Meehl GA (2011) Predictability of the Atlantic overturning circulation and associated surface patterns in two CCSM3 climate change ensemble experiments. J Clim 24:6054–6076

  • Tobis M, Schafer C, Foster I, Jacob R, Anderson J (1997) FOAM: expanding the horizons of climate modeling. In: Proceedings of the 1997 ACM/IEEE Conference on Supercomputing, pp 27–27

  • Wu L, Liu Z, Gallimore R, Jacob R, Lee D, Zhong Y (2003) Pacific decadal variability: the tropical mode and the North Pacific mode. J Clim 16:1101–1120

  • Younas W, Tang Y (2013) PNA predictability at various time scales. J Clim 26:9090–9114. https://doi.org/10.1175/JCLI-D-12-00609.1

  • Zhang S, Harrison MJ, Rosati A, Wittenberg A (2007) System design and evaluation of coupled ensemble data assimilation for global oceanic climate studies. Mon Weather Rev 135:3541–3564


Acknowledgements

This work is supported by the National Basic Research Program of China (2017YFA0603801), the National Key R&D Program of China (2016YFE0102400), the Special Fund for Public Welfare Industry (GYHY201506012), the Basic Research Fund of CAMS (2015Z002) and US NSF Climate Dynamics 1656907.

Author information

Correspondence to Xinyao Rong or Zhengyu Liu.

Appendix: derivation of perfect skill and actual skill in AR1 model

In this appendix, we derive the perfect and actual skills in the case of an infinite number of forecasts. We first derive the forecast ACC and RMSE for the RW forecast in the presence of initial error and sampling error. Recalling Eqs. (6) and (8), assume the forecast starts from step n of the real-world truth, \(X_{n}^{{rw}}\), and we want to forecast the state at step n + k. The truth at step n + k is then \(X_{{n+k}}^{{rw}}\), and the ensemble mean of the forecast is \(\overline {{X_{{n+k}}^{{f,~rw}}}}\). Assuming the ensemble mean of the initial-condition error is \(\varepsilon _{{i,n}}^{{rw}}\), we have:

$$X_{{n+k}}^{{rw}}={r^k}X_{n}^{{rw}}+\mathop \sum \limits_{{j=0}}^{{k - 1}} ({r^{k - j - 1}}\varepsilon _{{n+j}}^{{rw}}),$$
(20)
$$\overline {{X_{{n+k}}^{{f,~rw}}}} ={p^k}X_{n}^{{rw}}+\frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{k - j-1}}\mathop \sum \limits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right)+{p^k}\varepsilon _{{i,n}}^{{rw}},$$
(21)

where m is the ensemble size and \(\varepsilon _{{l,n+j}}^{{pm}}\) is the noise of ensemble member l at step n + j. The variance of \(\overline {{X_{{n+k}}^{{f,~rw}}}}\) is then:

$$\begin{aligned} \left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle & =\left\langle {{p^k}X_{n}^{{rw}}+\frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{k - j-1}}\mathop \sum \limits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right)+{p^k}\varepsilon _{{i,n}}^{{rw}},~{p^k}X_{n}^{{rw}}+\frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{k - j - 1}}\mathop \sum \limits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right)+{p^k}\varepsilon _{{i,n}}^{{rw}}} \right\rangle \\ & ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle +\left\langle {\frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{k - j-1}}\mathop \sum \limits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right),\frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{k - j-1}}\mathop \sum \limits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right)} \right\rangle +{p^{2k}} \\ & \quad \times \left\langle {\varepsilon _{{i,n}}^{{rw}},\varepsilon _{{i,n}}^{{rw}}} \right\rangle ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle +\frac{1}{{{m^2}}}\mathop \sum \limits_{{j=0}}^{k-1} \left( {{p^{2\left( {k - j-1} \right)}}\mathop \sum \limits_{{l=1}}^{{m}} \left\langle {\varepsilon _{{l,n+j}}^{{pm}},\varepsilon _{{l,n+j}}^{{pm}}} \right\rangle } \right)+{p^{2k}}\left\langle {\varepsilon _{{i,n}}^{{rw}},\varepsilon _{{i,n}}^{{rw}}} \right\rangle \\ & ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle+ \frac{1}{m}\mathop \sum \limits_{{j=0}}^{k-1} {p^{2\left( {k - j-1} \right)}}\left\langle {{\varepsilon ^{pm}},{\varepsilon ^{pm}}} \right\rangle +{p^{2k}}\left\langle {\varepsilon _{{i,n}}^{{rw}},\varepsilon _{{i,n}}^{{rw}}} \right\rangle ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle \\ & \quad +\frac{1}{m}\left( {1 - {p^2}} \right)\left( {\mathop \sum \limits_{{j=0}}^{k-1} {p^{2\left( {k - j-1} \right)}}} \right)\left\langle {X_{n}^{{pm}},X_{n}^{{pm}}} \right\rangle +{p^{2k}}\left\langle {\varepsilon _{{i,n}}^{{rw}},\varepsilon _{{i,n}}^{{rw}}} \right\rangle =\left[ {{p^{2k}}\left( {1+{c_{rw}}^{2}} \right)+\frac{d}{m}\left( {1 - {p^{2k}}} \right)} \right]\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle , \\ \end{aligned}$$
(22)

where \(\langle \cdot ,\cdot \rangle\) denotes the covariance over an infinite number of forecasts, \(d=\langle X_{n}^{{pm}},X_{n}^{{pm}} \rangle/ \langle X_{n}^{{rw}},X_{n}^{{rw}} \rangle\) is the ratio of the forecast-model variance to the real-world variance of X, and \({c_{rw}}^{2}=\langle \varepsilon _{{i,n}}^{{rw}},~\varepsilon _{{i,n}}^{{rw}} \rangle/\langle X_{n}^{{rw}},~X_{n}^{{rw}} \rangle\) is the ratio of the initial-error variance to the real-world variance of X. Note that we assume the state \(X_{n}^{{rw}}\) is independent of the noise and the initial error, and that the noise and the initial error are mutually uncorrelated.
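
For readers who wish to reproduce the analogue setup numerically, the following Python sketch (an illustration under the assumptions above; the function and parameter names are ours, not the paper's code, and the ensemble-mean initial error \(\varepsilon _{{i,n}}^{{rw}}\) is represented by independent per-member perturbations) generates the real-world truth of Eq. (20) and the ensemble-mean forecast of Eq. (21):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_truth(r, k, rng=rng):
    """Real-world truth (Eq. 20): an AR1 process with coefficient r and unit
    stationary variance, integrated k steps from a stationary initial state."""
    x = rng.standard_normal()
    sigma = np.sqrt(1.0 - r**2)          # noise std that keeps Var(X) = 1
    path = [x]
    for _ in range(k):
        x = r * x + sigma * rng.standard_normal()
        path.append(x)
    return np.array(path)

def ensemble_mean_forecast(x0, p, k, m, d=1.0, init_err_std=0.0, rng=rng):
    """Ensemble-mean forecast (Eq. 21): m AR1 members with coefficient p and
    stationary variance d (relative to the real world), initialized from the
    observed state x0 plus independent initial-condition errors."""
    sigma = np.sqrt((1.0 - p**2) * d)
    members = x0 + init_err_std * rng.standard_normal(m)
    for _ in range(k):
        members = p * members + sigma * rng.standard_normal(m)
    return members.mean()

# Example: persistence r = 0.9 in the "real world", a biased model with p = 0.8.
truth = simulate_truth(r=0.9, k=6)
forecast = ensemble_mean_forecast(truth[0], p=0.8, k=6, m=16, d=1.0)
```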

Then, the ACC of the RW forecast at forecast step k is:

$$\begin{aligned} AC{C_{rw,k}} & =\frac{{\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,X_{{n+k}}^{{rw}}} \right\rangle }}{{\sqrt {\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle \cdot \left\langle {X_{{n+k}}^{{rw}},~X_{{n+k}}^{{rw}}} \right\rangle } }}=\frac{{\left\langle {{p^k}X_{n}^{{rw}}+\frac{1}{m}\mathop \sum \nolimits_{{j=0}}^{k-1} \left( {{p^{k - j-1}}\mathop \sum \nolimits_{{l=1}}^{{m}} \varepsilon _{{l,n+j}}^{{pm}}} \right)+{p^k}\varepsilon _{{i,n}}^{{rw}},{r^k}X_{n}^{{rw}}+\mathop \sum \nolimits_{{j=0}}^{{k - 1}} \left( {{r^{k - j - 1}}\varepsilon _{{n+j}}^{{rw}}} \right)} \right\rangle }}{{\sqrt {\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle \cdot \left\langle {X_{{n+k}}^{{rw}},~X_{{n+k}}^{{rw}}} \right\rangle } }} \\ & =\frac{{{p^k}{r^k}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle }}{{\sqrt {\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle \cdot \left\langle {X_{{n+k}}^{{rw}},~X_{{n+k}}^{{rw}}} \right\rangle } }}=\frac{{{r^k}}}{{\sqrt {1+{c_{rw}}^{2}+\frac{d}{m}\left( {\frac{1}{{{p^{2k}}}} - 1} \right)} }}, \\ \end{aligned}$$
(23)

and the RMSE at forecast step k is:

$$RMS{E_{rw,k}}^{2}=\frac{{\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} - X_{{n+k}}^{{rw}},\overline {{X_{{n+k}}^{{f,~rw}}}} - X_{{n+k}}^{{rw}}} \right\rangle }}{{\left\langle {X_{{n+k}}^{{rw}},~X_{{n+k}}^{{rw}}} \right\rangle }}={\left( {{p^k} - {r^k}} \right)^2}+\left( {1 - {r^{2k}}} \right)+\frac{d}{m}\left( {1 - {p^{2k}}} \right)+{p^{2k}}{c_{rw}}^{2}.$$
(24)

Similarly, the ACC and RMSE of the PM forecast at step k are:

$$AC{C_{pm,k}}=\frac{{\left\langle {\overline {{X_{{n+k}}^{{f,~pm}}}} ,X_{{n+k}}^{{pm}}} \right\rangle }}{{\sqrt {\left\langle {\overline {{X_{{n+k}}^{{f,~pm}}}} ,\overline {{X_{{n+k}}^{{f,~pm}}}} } \right\rangle \cdot \left\langle {X_{{n+k}}^{{pm}},~X_{{n+k}}^{{pm}}} \right\rangle } }}=\frac{{{p^k}}}{{\sqrt {1+{c_{pm}}^{2}+\frac{1}{m}\left( {\frac{1}{{{p^{2k}}}} - 1} \right)} }},$$
(25)
$$RMS{E_{pm,k}}^{2}=\frac{{\left\langle {\overline {{X_{{n+k}}^{{f,~pm}}}} - X_{{n+k}}^{{pm}},\overline {{X_{{n+k}}^{{f,~pm}}}} - X_{{n+k}}^{{pm}}} \right\rangle }}{{\left\langle {X_{{n+k}}^{{pm}},~X_{{n+k}}^{{pm}}} \right\rangle }}=\left( {1 - {p^{2k}}} \right)+\frac{1}{m}\left( {1 - {p^{2k}}} \right)+{p^{2k}}{c_{pm}}^{2},$$
(26)

where \({c_{pm}}^{2}=\left\langle {\varepsilon _{{i,n}}^{{pm}},~\varepsilon _{{i,n}}^{{pm}}} \right\rangle /\left\langle {X_{n}^{{pm}},~X_{n}^{{pm}}} \right\rangle\) is the ratio of the initial-error variance to the variance of X in the perfect model forecast.
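
The analytic expressions (23)–(26) are straightforward to evaluate; the following sketch (illustrative only, with `d`, `c_rw2` and `c_pm2` mirroring the symbols defined above) computes them for a given forecast step and ensemble size:

```python
import numpy as np

def acc_rmse_rw(r, p, k, m, d=1.0, c_rw2=0.0):
    """Analytic ACC and squared RMSE of the real-world (RW) forecast, Eqs. (23)-(24)."""
    acc = r**k / np.sqrt(1.0 + c_rw2 + (d / m) * (1.0 / p**(2 * k) - 1.0))
    rmse2 = ((p**k - r**k)**2 + (1.0 - r**(2 * k))
             + (d / m) * (1.0 - p**(2 * k)) + p**(2 * k) * c_rw2)
    return acc, rmse2

def acc_rmse_pm(p, k, m, c_pm2=0.0):
    """Analytic ACC and squared RMSE of the perfect-model (PM) forecast, Eqs. (25)-(26)."""
    acc = p**k / np.sqrt(1.0 + c_pm2 + (1.0 / m) * (1.0 / p**(2 * k) - 1.0))
    rmse2 = (1.0 + 1.0 / m) * (1.0 - p**(2 * k)) + p**(2 * k) * c_pm2
    return acc, rmse2

# Example: higher observed persistence (r > p) with a modest ensemble size.
print(acc_rmse_rw(r=0.9, p=0.8, k=3, m=16, d=1.2))
print(acc_rmse_pm(p=0.8, k=3, m=16))
```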

Under the assumption of a perfect initial condition and infinite ensemble size, we recover Eqs. (9)–(13) as:

$$AC{C_{rw,k}}={r^k},$$
(27)
$$AC{C_{pm,k}}={p^k},$$
(28)
$$RMSE_{{rw,k}}^{2}={\left( {{p^k} - {r^k}} \right)^2}+\left( {1 - {r^{2k}}} \right),$$
(29)
$$RMSE_{{pm,k}}^{2}=\left( {1 - {p^{2k}}} \right).$$
(30)
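
As a quick numerical check of Eqs. (27)–(30), with r, p and k chosen for illustration only:

```python
# Limit of infinite ensemble size and perfect initial conditions (Eqs. 27-30),
# with illustrative values: r = 0.9 (real-world persistence), p = 0.8 (model), k = 3.
r, p, k = 0.9, 0.8, 3
acc_rw, acc_pm = r**k, p**k                      # 0.729 vs 0.512
rmse2_rw = (p**k - r**k)**2 + (1 - r**(2 * k))   # ~0.516
rmse2_pm = 1 - p**(2 * k)                        # ~0.738
# With r > p the actual ACC exceeds the perfect ACC and the actual RMSE is
# smaller than the perfect RMSE: the skill-persistence rule in its cleanest form.
```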

Now we consider an additional case, namely a perfect real-world model used to predict the truth of the real world. In this case the ACC and RMSE are obtained by simply replacing p in Eqs. (28) and (30) with r:

$$AC{C_{prwm,k}}={r^k},$$
(31)
$$RMS{E_{prwm,k}}^{2}=1 - {r^{2{\text{k}}}},$$
(32)

where ‘prwm’ denotes a forecast by the perfect real-world model (PRWM). The ACC of the PRWM forecast is identical to that of the RW forecast; the difference in RMSE, however, is:

$$RMS{E_{rw,k}}^{2} - RMS{E_{prwm,k}}^{2}={\left( {{p^k} - {r^k}} \right)^2} \geqslant 0,$$
(33)

This indicates that it is the forecast skill of the PRWM (prediction with a perfect model of the real world), rather than that of the PM (prediction with a biased model), that provides the upper bound of the actual skill. Therefore, a model provides the true upper bound for actual skill only if it reproduces the correct statistical properties of the real world, at least the persistence.
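
Continuing the small numerical example above, the PRWM skill of Eqs. (31)–(33) can be checked as follows (again with illustrative values):

```python
# Perfect real-world model (PRWM, Eqs. 31-33): forecasting the real world with
# the unbiased coefficient r instead of the biased p (values from the snippet above).
rmse2_prwm = 1 - r**(2 * k)                      # ~0.469
assert rmse2_rw - rmse2_prwm >= 0                # Eq. (33): the model bias only adds error
# The true upper bound of actual skill thus comes from the PRWM, not from the
# biased model's "perfect" skill.
```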

Now we discuss the impact of sampling error on forecast skill and explain why the percentage of RMSE points in the 2nd quadrant is larger than that of ACC points, as well as why the percentage ratio between the 4th and 1st quadrants is higher than that between the 2nd and 3rd quadrants. To simplify the discussion, we assume the initial error is zero. From Eqs. (23) and (25), the difference in ACC between the RW and PM forecasts can be written as:

$$AC{C_{rw,k}} - AC{C_{pm,k}}=\frac{{{r^k}}}{{\sqrt {1+\frac{d}{m}\left( {\frac{1}{{{p^{2k}}}} - 1} \right)} }} - \frac{{{p^k}}}{{\sqrt {1+\frac{1}{m}\left( {\frac{1}{{{p^{2k}}}} - 1} \right)} }}=\frac{1}{{\sqrt {1+\frac{1}{m}\left( {\frac{1}{{{p^{2k}}}} - 1} \right)} }}\left( {\frac{{{r^k}}}{a} - {p^k}} \right),$$
(34)

where

$$a=\sqrt {1+\frac{{(d - 1)\left( {\frac{1}{{{p^{2k}}}} - 1} \right)}}{{m+\left( {\frac{1}{{{p^{2k}}}} - 1} \right)}}} .$$
(35)

If r > p, the sign of \(AC{C_{rw,k}} - AC{C_{pm,k}}\) depends on the sign of (d − 1). When d < 1, we have a < 1, so that \(\frac{{{r^k}}}{a} - {p^k}>0~\) and \(AC{C_{rw,k}} - AC{C_{pm,k}}>0\), indicating that points cannot move from the 3rd quadrant (infinite ensemble size) to the 2nd quadrant (finite ensemble size). However, when d > 1, which is usually the case in FOAM, we have a > 1, so \(\frac{{{r^k}}}{a} - {p^k}\) may become negative and points can shift into the 2nd quadrant.

In the case of r < p and d > 1, we have a > 1 and, in turn, \(\frac{{{r^k}}}{a} - {p^k}<0\). Thus, points cannot move from the 1st quadrant into the 4th quadrant. When d < 1, \(\frac{{{r^k}}}{a} - {p^k}\) may become positive; however, since this rarely happens in FOAM, the percentage in the 4th quadrant remains small.

The difference in RMSE is:

$$RMS{E_{pm,k}}^{2} - RMS{E_{rw,k}}^{2}=2{p^k}\left( {{r^k} - {p^k}} \right)+\frac{{1 - d}}{m}\left( {1 - {p^{2k}}} \right).$$
(36)

Since d > 1 for a large number of points in FOAM, if r > p, \(RMS{E_{pm,k}}^{2} - RMS{E_{rw,k}}^{2}\) can become negative, so the points may move from the 3rd quadrant to the 2nd quadrant. When r < p and d > 1, \(RMS{E_{pm,k}}^{2} - RMS{E_{rw,k}}^{2}<0\), and the points remain confined to the 1st quadrant. If d < 1, \(RMS{E_{pm,k}}^{2} - RMS{E_{rw,k}}^{2}\) could become positive, allowing points to move into the 4th quadrant but, as discussed above, this seldom occurs.

Note that the sign of the RMSE difference is more sensitive to the sampling error than that of the ACC difference, especially for small ensemble size and large forecast step k. This can be seen from Eqs. (34) and (36). For large k, the first term on the right-hand side of Eq. (36) is small, while the second term approaches \(\frac{{1 - d}}{m}\), so the sign of (1 − d) directly determines the sign of the RMSE difference. For the ACC, however, the sign of the difference depends not only on d, but also on \({r^k}\) and \({p^k}\). Consider the extreme case of \(k \to \infty\) and r > p: then \(a \to \sqrt d\), and the sign of \(\frac{{{r^k}}}{a} - {p^k}\) is opposite to that of \({r^k} - {p^k}\) only when \(\sqrt d>{r^k}/{p^k}\).
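
The sampling-error effects discussed above can be illustrated with a small Monte Carlo experiment in the AR1 setting. The sketch below is illustrative only; the parameter values and number of forecast cases are our own choices, not the FOAM configuration:

```python
import numpy as np

def finite_ensemble_skill(r, p, k, m, d=1.0, n_cases=50000, seed=1):
    """Monte Carlo estimate of the finite-ensemble skill differences of
    Eqs. (34)-(36), with zero initial error. Returns
    (ACC_rw - ACC_pm, RMSE_pm^2 - RMSE_rw^2)."""
    rng = np.random.default_rng(seed)
    sig_r = np.sqrt(1.0 - r**2)                 # real-world noise std (unit variance)
    sig_p = np.sqrt((1.0 - p**2) * d)           # model noise std (stationary variance d)

    x0_rw = rng.standard_normal(n_cases)                 # RW initial truths
    x0_pm = np.sqrt(d) * rng.standard_normal(n_cases)    # PM initial truths
    truth_rw, truth_pm = x0_rw.copy(), x0_pm.copy()
    fc_rw = np.repeat(x0_rw[:, None], m, axis=1)         # RW forecast members (model p)
    fc_pm = np.repeat(x0_pm[:, None], m, axis=1)         # PM forecast members (model p)
    for _ in range(k):
        truth_rw = r * truth_rw + sig_r * rng.standard_normal(n_cases)
        truth_pm = p * truth_pm + sig_p * rng.standard_normal(n_cases)
        fc_rw = p * fc_rw + sig_p * rng.standard_normal((n_cases, m))
        fc_pm = p * fc_pm + sig_p * rng.standard_normal((n_cases, m))

    def skill(fc_mean, truth):
        acc = np.corrcoef(fc_mean, truth)[0, 1]
        rmse2 = np.mean((fc_mean - truth)**2) / np.var(truth)
        return acc, rmse2

    acc_rw, rmse2_rw = skill(fc_rw.mean(axis=1), truth_rw)
    acc_pm, rmse2_pm = skill(fc_pm.mean(axis=1), truth_pm)
    return acc_rw - acc_pm, rmse2_pm - rmse2_rw

# With r > p, d > 1 and a small ensemble, the RMSE difference tends to flip sign
# while the ACC difference stays positive, consistent with the discussion above.
print(finite_ensemble_skill(r=0.9, p=0.85, k=6, m=4, d=2.0))
```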

Now we derive the ACC and RMSE using the traditional perfect model approach (e.g., Kumar et al. 2014). For simplicity, we discuss the case of infinite ensemble size with zero initial error.

Take one member of the RW ensemble as “truth”:

$$X_{{n+k}}^{{f,rw}}={p^k}X_{n}^{{rw}}+\mathop \sum \limits_{{j=0}}^{{k - 1}} \left( {{p^{k - j - 1}}\varepsilon _{{n+j}}^{{pm}}} \right).$$
(37)

Recalling Eq. (21), with the assumption of infinite ensemble size and zero initial error, the ensemble mean of the RW forecasts is:

$$\overline {{X_{{n+k}}^{{f,~rw}}}} ={p^k}X_{n}^{{rw}}.$$
(38)

Then we have:

$$\left\langle {X_{{n+k}}^{{f,rw}},\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle ,$$
(39)
$$\left\langle {X_{{n+k}}^{{f,rw}},X_{{n+k}}^{{f,rw}}} \right\rangle ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle +\left( {1 - {p^{2k}}} \right)\left\langle {X_{n}^{{pm}},X_{n}^{{pm}}} \right\rangle =\left[ {(1 - d){p^{2k}}+d} \right]\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle ,$$
(40)
$$\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle ={p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle ,$$
(41)
$$\left\langle {X_{{n+k}}^{{f,rw}} - \overline {{X_{{n+k}}^{{f,~rw}}}} ,X_{{n+k}}^{{f,rw}} - \overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle =\left( {1 - {p^{2k}}} \right)\left\langle {X_{n}^{{pm}},X_{n}^{{pm}}} \right\rangle =d\left( {1 - {p^{2k}}} \right)\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle ,$$
(42)

where \(d=\left\langle {X_{n}^{{pm}},X_{n}^{{pm}}} \right\rangle /\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle .\)

The ACC and RMSE of the PM forecast obtained with the traditional approach can then be written as:

$$ACC_{{pm,k}}^{t}=\frac{{\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,X_{{n+k}}^{{f,rw}}} \right\rangle }}{{\sqrt {\left\langle {\overline {{X_{{n+k}}^{{f,~rw}}}} ,\overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle ,\left\langle {X_{{n+k}}^{{f,rw}},X_{{n+k}}^{{f,rw}}} \right\rangle } }}=\frac{{{p^{2k}}\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle }}{{{p^k}\sqrt {(1 - d){p^{2k}}+d} \left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle }}=\frac{{{p^k}}}{{\sqrt {(1 - d){p^{2k}}+d} }},$$
(43)
$${\left( {RMSE_{{pm,k}}^{t}} \right)^2}=\frac{{\left\langle {X_{{n+k}}^{{f,rw}} - \overline {{X_{{n+k}}^{{f,~rw}}}} ,X_{{n+k}}^{{f,rw}} - \overline {{X_{{n+k}}^{{f,~rw}}}} } \right\rangle }}{{\left\langle {X_{{n+k}}^{{f,rw}},X_{{n+k}}^{{f,rw}}} \right\rangle }}=\frac{{d\left( {1 - {p^{2k}}} \right)\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle }}{{\left[ {(1 - d){p^{2k}}+d} \right]\left\langle {X_{n}^{{rw}},X_{n}^{{rw}}} \right\rangle }}=1 - \frac{{{p^{2k}}}}{{(1 - d){p^{2k}}+d}}.$$
(44)

The differences in ACC and RMSE between the traditional approach and our method for the perfect model forecast can therefore be written as:

$${\left( {ACC_{{pm,k}}^{t}} \right)^2} - \left( {ACC_{{pm,k}}^{{}}} \right)^2=\frac{{(1 - d)\left( {{p^{2k}} - {p^{4k}}} \right)}}{{\left( {1 - d} \right){p^{2k}}+d}},$$
(45)
$${\left( {RMSE_{{pm,k}}^{t}} \right)^2} - {\left( {RMSE_{{pm,k}}^{{}}} \right)^2}=\frac{{(d - 1)\left( {{p^{2k}} - {p^{4k}}} \right)}}{{\left( {1 - d} \right){p^{2k}}+d}},$$
(46)

and the differences in ACC and RMSE between the perfect model forecast by the traditional approach and the real-world forecast are:

$${\left( {ACC_{{pm,k}}^{t}} \right)^2} - {\left( {ACC_{{rw,k}}^{{}}} \right)^2}=\left( {{p^{2k}} - {r^{2k}}} \right)+\frac{{\left( {1 - d} \right)\left( {{p^{2k}} - {p^{4k}}} \right)}}{{\left( {1 - d} \right){p^{2k}}+d}},$$
(47)
$${\left( {RMSE_{{pm,k}}^{t}} \right)^2} - {\left( {RMSE_{{rw,k}}^{{}}} \right)^2}=2{p^k}\left( {{r^k} - {p^k}} \right)+\frac{{(d - 1)\left( {{p^{2k}} - {p^{4k}}} \right)}}{{\left( {1 - d} \right){p^{2k}}+d}}.$$
(48)

When d = 1, i.e., the total variance is the same in the real world and the perfect model, the ACC and RMSE of the traditional approach (Eqs. 43 and 44) are identical to those of our approach (Eqs. 28 and 30). If d > 1 (d < 1), i.e., the variance of the perfect model forecast is larger (smaller) than that of the real world, the perfect skill of the traditional approach is lower (higher) than that of our approach (Eqs. 45 and 46). Thus, for the traditional approach, the difference between the perfect skill and the actual skill depends not only on p and r, but also on the variances of the real world and the perfect model. If \(~d \geqslant 1\) and \(p<r\), then \({( {ACC_{{pm,k}}^{t}} )^2} - {( {ACC_{{rw,k}}^{{}}})^2}<0\) and \({( {RMSE_{{pm,k}}^{t}} )^2} - {( {RMSE_{{rw,k}}^{{}}} )^2}>0\), so the skill-persistence rule holds for the traditional approach.
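
For comparison, the following sketch (illustrative values only) evaluates the traditional perfect skill of Eqs. (43)–(44) against our perfect skill (Eqs. 28 and 30) and the actual skill (Eqs. 27 and 29):

```python
import numpy as np

def traditional_pm_skill(p, k, d):
    """Perfect skill from the traditional approach (Eqs. 43-44), in which one
    RW-forecast ensemble member serves as the verifying "truth"."""
    acc_t = p**k / np.sqrt((1.0 - d) * p**(2 * k) + d)
    rmse2_t = 1.0 - p**(2 * k) / ((1.0 - d) * p**(2 * k) + d)
    return acc_t, rmse2_t

def our_pm_skill(p, k):
    """Perfect skill from our approach (Eqs. 28 and 30)."""
    return p**k, 1.0 - p**(2 * k)

def actual_skill(r, p, k):
    """Actual skill of the real-world forecast (Eqs. 27 and 29)."""
    return r**k, (p**k - r**k)**2 + (1.0 - r**(2 * k))

# d = 1 makes the two estimates of perfect skill coincide; d > 1 lowers the
# traditional perfect skill, so with p < r the skill-persistence rule still holds.
for d in (1.0, 1.5):
    print(d, traditional_pm_skill(p=0.8, k=3, d=d), our_pm_skill(p=0.8, k=3),
          actual_skill(r=0.9, p=0.8, k=3))
```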

Cite this article

Jin, Y., Rong, X. & Liu, Z. Potential predictability and forecast skill in ensemble climate forecast: a skill-persistence rule. Clim Dyn 51, 2725–2742 (2018). https://doi.org/10.1007/s00382-017-4040-z
