Appendix 1: QEANOVA Estimates of uncertainty components
We summarize the theoretical developments of the QEANOVA approach used to obtain unbiased estimators of uncertainty components for a simplified configuration where the trend model is a simple linear function of time. The full developments are given in Hingray and Blanchet (2018) for the general configuration where the trend model is a linear combination of L functions of time. Here, we first consider the case where the number of members differs from one chain to another; the simplified equations obtained when all chains have the same number of members are then given. For conciseness, we omit the subscript “\(QE\)” related to the QEANOVA approach.
Model.
We first consider the raw projections Y(g, m, t) with \(M_g\) members for each of the G chains, assuming that, for all \(t_s \le t \le t_f\):
$$\begin{aligned} Y(g,m,t)=\lambda (g,t) + \nu (g,m,t), \end{aligned}$$
(10)
where \(\lambda (g,t)\) is the trend model expressed as a linear function of time: \(\lambda (g,t)=\varLambda _{g1}+\varLambda _{g2}(t-t_s)\), and where the \(\nu (g,m,t)\) are independent and homoscedastic random variables (with variance \(\sigma _{\nu _g}^2\)). We next consider the change variable at future prediction lead time \(t \in [t_c,t_f]\):
$$\begin{aligned} X(g,m,t)=Y(g,m,t)-Y(g,m,t_c)=\alpha (g,t)+\eta (g,m,t), \end{aligned}$$
(11)
where \(t_c\) is the reference period. We have thus \(\alpha (g,t)=\varLambda _{g2}(t-t_c)\) and \(\eta (g,m,t)=\nu (g,m,t)-\nu (g,m,t_c)\).
Unbiased estimation of the model parameters for the raw variable Y.
We discretize \([t_s,t_f]\) into T time steps (from \(t_s=t_1\) to \(t_f=t_T\)) and write \(t_c\) as the Kth time step (i.e. \(t_c=t_K\)). We are interested in the future prediction lead time \(t_k \in [t_K,t_T]\). Consider the regression model (10) for a particular g. Unbiased estimators of the regression parameters \((\varLambda _{g1},\varLambda _{g2})\) are given by the least squares estimates
$$\begin{aligned} ({\hat{\varLambda }}_{g1}, {\hat{\varLambda }}_{g2} )' ={\mathbb {V}}\; {\mathbb {R}}' \; \left( \frac{1}{M_g} \sum _{m=1}^{M_g} Y(g,m,t_1),\ldots ,\frac{1}{M_g} \sum _{m=1}^{M_g} Y(g,m,t_T)\right) ' \end{aligned}$$
where \('\) denotes the transpose, \({\mathbb {R}}\) is the \(T \times 2\) matrix of covariates whose kth row is \((1,t_k-t_1)\), for \(1 \le k \le T\), and \({\mathbb {V}}= ({\mathbb {R}}' {\mathbb {R}})^{-1}\). The covariance matrix of the estimators \(({\hat{\varLambda }}_{g1}, {\hat{\varLambda }}_{g2})\) is estimated by \(\widehat{\sigma _{\nu _g}^2}\, M_g^{-1}\, {\mathbb {V}}\), where \(\widehat{\sigma _{\nu _g}^2}\) is an unbiased estimator of \(\sigma _{\nu _g}^2\) given by
$$\begin{aligned} \widehat{\sigma _{\nu _g}^2} = \frac{1}{TM_g-L} \sum _{m=1}^{M_g} \sum _{k=1}^T \left\{ Y(g,m,t_k)-{\hat{\lambda }}(g,t_k)\right\} ^2 \end{aligned}$$
(12)
where \(L=2\) and \({\hat{\lambda }}(g,t_k)={\hat{\varLambda }}_{g1}+{\hat{\varLambda }}_{g2}(t_k-t_1)\).
In particular, an unbiased estimator of \(\varLambda _{g2}^2\) is
$$\begin{aligned} \widehat{\varLambda _{g2}^2}={\hat{\varLambda }}_{g2}^2-\widehat{\sigma _{\nu _g}^2} M_g^{-1} V_{22}, \end{aligned}$$
(13)
where \(V_{22}\) is the element (2, 2) of \({\mathbb {V}}\). Considering \(t_1,\ldots ,t_T\) regularly spaced on \([t_s; t_f]\), we have \(V_{22}=12(T-1)/\{T(T+1)(t_T-t_1)^2\}\).
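The estimators above can be illustrated numerically for a single chain. The sketch below is not from the paper: the member count, time grid and parameter values are arbitrary assumptions, chosen only to show Eqs. (12)–(13) and the closed form of \(V_{22}\) on a regular grid in action.

```python
import numpy as np

# Illustrative sketch for one chain g; T, M_g, the time grid and the
# "true" parameters below are assumptions, not values from the paper.
rng = np.random.default_rng(0)
T, M_g, L = 30, 4, 2                      # T time steps, M_g members, L = 2 covariates
t = np.linspace(2000.0, 2090.0, T)        # regular grid t_1, ..., t_T

# Design matrix R (T x 2) with k-th row (1, t_k - t_1), and V = (R'R)^{-1}
R = np.column_stack([np.ones(T), t - t[0]])
V = np.linalg.inv(R.T @ R)

# Closed form for V_22 on a regular grid (see text)
V22_closed = 12 * (T - 1) / (T * (T + 1) * (t[-1] - t[0]) ** 2)
assert np.isclose(V[1, 1], V22_closed)

# Synthetic raw projections Y(g, m, t) = Lambda_g1 + Lambda_g2 (t - t_1) + noise
Lam1, Lam2, sigma_nu = 10.0, 0.02, 0.5
Y = Lam1 + Lam2 * (t - t[0]) + rng.normal(0.0, sigma_nu, size=(M_g, T))

# Least-squares estimates from the member-mean series
Ybar = Y.mean(axis=0)                     # (1/M_g) sum_m Y(g, m, t_k)
Lam_hat = V @ R.T @ Ybar                  # (Lambda_g1_hat, Lambda_g2_hat)

# Unbiased estimator of sigma_nu^2, Eq. (12): residuals of every member
fitted = R @ Lam_hat
sigma2_hat = ((Y - fitted) ** 2).sum() / (T * M_g - L)

# Unbiased estimator of Lambda_g2^2, Eq. (13)
Lam2_sq_hat = Lam_hat[1] ** 2 - sigma2_hat * V[1, 1] / M_g
```

Note that the fit is computed from the member-mean series, while the variance estimator of Eq. (12) pools the residuals of all members, which is why the divisor is \(TM_g-L\).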
Unbiased estimation of the sample variance of the \(\alpha\)’s in the change variable.
Given (11) and (13), an unbiased estimator of the sample variance of \(\alpha (g,t_k)\), i.e. of \(s_\alpha ^2(t_k)=\frac{1}{G-1}\sum _{g=1}^G \{\alpha (g,t_k)\}^2\) , is
$$\begin{aligned} \widehat{s_\alpha ^2}(t_k)=s_{{\hat{\alpha }}}^2(t_k)-\frac{12}{T}\frac{T-1}{T+1}\left( \frac{t_k-t_K}{t_T-t_1}\right) ^2 \left( \frac{1}{G} \sum _{g=1}^G \frac{\widehat{\sigma _{\nu _g}^2}}{M_g} \right) , \end{aligned}$$
where
$$\begin{aligned} s_{{\hat{\alpha }}}^2(t_k)=\frac{(t_k-t_K)^2}{G-1}\sum _{g=1}^G {\hat{\varLambda }}_{g2}^2. \end{aligned}$$
When all GCMs have the same number of runs (M), this expression reduces to:
$$\begin{aligned} {\widehat{s}}_{\alpha }^2(t_k) = s_{{\hat{\alpha }}}^2(t_k) -\frac{A(t_k,{\mathcal {C}})}{M}{\widehat{\sigma }}_{\eta }^2, \end{aligned}$$
(14)
where \({\widehat{\sigma }}_{\eta }^2\) is an unbiased estimator of internal variability variance for X
$$\begin{aligned} {\widehat{\sigma }}_{\eta }^2= \frac{2}{G} \sum _{g=1}^G {\widehat{\sigma _{\nu _g}^2}} \end{aligned}$$
(15)
and where
$$\begin{aligned} A(t_k,{\mathcal {C}})= \frac{6 (T-1)}{T(T+1)} \left( \frac{t_k-t_K}{t_T-t_1}\right) ^2 \end{aligned}$$
(16)
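The equal-member estimator of Eqs. (14)–(16) can be sketched as follows. All sizes and parameter values below are illustrative assumptions; the point is only the bias correction \(A(t_k,{\mathcal {C}})\,{\widehat{\sigma }}_{\eta }^2/M\) subtracted from the naive sample variance of the estimated changes.

```python
import numpy as np

# Sketch of Eqs. (14)-(16) for G chains with the same member number M.
# G, M, T, the time grid and the variances are illustrative assumptions.
rng = np.random.default_rng(1)
G, M, T = 6, 3, 30
t = np.linspace(2000.0, 2090.0, T)
K = 3                                      # reference period t_c = t_K (index K-1)
k = T - 1                                  # future lead time t_k = t_T here

# Chain-specific slopes with zero mean; common noise level sigma_nu
Lam2 = rng.normal(0.0, 0.01, size=G)
Lam2 -= Lam2.mean()
sigma_nu = 0.5

R = np.column_stack([np.ones(T), t - t[0]])
V = np.linalg.inv(R.T @ R)

slope_hat = np.empty(G)
sigma2_hat = np.empty(G)
for g in range(G):
    Y = 10.0 + Lam2[g] * (t - t[0]) + rng.normal(0.0, sigma_nu, size=(M, T))
    coef = V @ R.T @ Y.mean(axis=0)                       # least-squares fit
    slope_hat[g] = coef[1]
    sigma2_hat[g] = ((Y - R @ coef) ** 2).sum() / (T * M - 2)   # Eq. (12)

# Naive sample variance of the estimated alpha(g, t_k) = slope * (t_k - t_K)
s2_alpha_naive = (t[k] - t[K - 1]) ** 2 * (slope_hat ** 2).sum() / (G - 1)

# A(t_k, C), Eq. (16), and the unbiased correction of Eq. (14)
A = 6 * (T - 1) / (T * (T + 1)) * ((t[k] - t[K - 1]) / (t[-1] - t[0])) ** 2
sigma2_eta_hat = 2.0 * sigma2_hat.mean()          # Eq. (15)
s2_alpha_hat = s2_alpha_naive - A / M * sigma2_eta_hat
```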
Appendix 2: Estimates with a local QEANOVA approach
We summarize here the expressions of the different uncertainty estimators obtained with the local-QEANOVA approach. The full developments, similar to those presented in Appendix 1 for the QEANOVA approach, are detailed in Hingray and Blanchet (2018).
Model: a regression model is still considered to estimate the response function \(\lambda (g, t)\) for Y in Eq. 10 but \(\lambda (g, t)\) is assumed to be only locally linear in time, in the neighborhoods of \(t_c\) and \(t_e\) respectively, i.e. on \([t_c-\omega ,t_c+\omega ]\) and \([t_e-\omega ,t_e+\omega ]\), where \(t_e \in [t_c,t_f]\) is the future prediction lead time under consideration. \(\lambda (g, t)\) can thus be expressed as
$$\begin{aligned} \lambda (g,t)=\left\{ \begin{array}{cc} \lambda _{c}(g,t)=\varLambda _{g1,c} + (t-t_c) \varLambda _{g2,c} &{} \text{ for } t_{c}-\omega \le t \le t_{c}+\omega ,\\ \lambda _{e}(g,t)=\varLambda _{g1,e} + (t-t_e) \varLambda _{g2,e} &{} \text{ for } t_{e}-\omega \le t \le t_{e}+\omega . \end{array} \right. \end{aligned}$$
(17)
The change variable X(g, m, t), for \(t_e-\omega \le t \le t_e+\omega\), in Eq. 11 is such that \(\alpha (g,t)=(\varLambda _{g1,e}-\varLambda _{g1,c})+(t-t_e)\varLambda _{g2,e}\).
Each interval \([t_c-\omega ,t_c+\omega ]\) and \([t_e-\omega ,t_e+\omega ]\) is discretized into \(T^*\) regularly spaced time steps separated by \(dt=2 \omega /(T^*-1)\), with \(T^*\) odd, giving respectively the sequences \(t_1,\ldots ,t_{T^*}\) and \(t_{T^*+1},\ldots ,t_{2T^*}\). The values of Y at these times are then used to estimate the regression coefficients of the linear trend models in Eq. 17. For the illustration given in Sect. 5.3, \(dt = \omega = 20\) years and \(T^*=3\).
Unbiased estimators of model uncertainty and internal variability variance.
Following Hingray and Blanchet (2018), an unbiased estimator of model uncertainty variance, i.e. of the sample variance of \(\alpha (g,t_e)\), is
$$\begin{aligned} \widehat{s_\alpha ^2}(t_e) = s_{{\hat{\alpha }}}^2(t_e) - \frac{4}{T} \left( \frac{1}{G}\sum _{g=1}^G \frac{ \widehat{\sigma _{\nu _g}^2}}{ M_g}\right) \end{aligned}$$
(18)
where \(T=2T^*\) is the total number of time steps considered in the analysis and where \(\widehat{\sigma _{\nu _g}^2}\) is an unbiased estimator of \(\sigma _{\nu _g}^2\) given by
$$\begin{aligned} \widehat{\sigma _{\nu _g}^2} =\frac{1}{TM_g-4} \sum _{m=1}^{M_g} \left[ \sum _{j=1}^{T^*} \left\{ Y(g,m,t_j)-{\hat{\lambda }}_{c}(g,t_j)\right\} ^2 + \sum _{j=T^*+1}^{2T^*} \left\{ Y(g,m,t_j)-{\hat{\lambda }}_{e}(g,t_j)\right\} ^2\right] \end{aligned}$$
(19)
with \({\hat{\lambda }}_{c}(g,t_j)={\hat{\varLambda }}_{g1,c}+{\hat{\varLambda }}_{g2,c}(t_j-t_c)\) and \({\hat{\lambda }}_{e}(g,t_j)={\hat{\varLambda }}_{g1,e}+{\hat{\varLambda }}_{g2,e}(t_j-t_e)\) where \({\hat{\varLambda }}_{g1,c}, {\hat{\varLambda }}_{g2,c}, {\hat{\varLambda }}_{g1,e}\) and \({\hat{\varLambda }}_{g2,e}\) are the regression coefficients of the two linear models in Eq. 17.
When all GCMs have the same number of runs (M), the expression of model uncertainty in Eq. 18 reduces to
$$\begin{aligned} \widehat{s_\alpha ^2}(t_e) = s_{{\widehat{\alpha }}}^2(t_e) - \frac{1}{MT^*} {\hat{\sigma }}_{\eta }^2 \end{aligned}$$
(21)
where \({\widehat{\sigma }}_{\eta }^2\) is an unbiased estimator of internal variability variance for X
$$\begin{aligned} {\widehat{\sigma }}_{\eta }^2= \frac{2}{G} \sum _{g=1}^G \widehat{\sigma _{\nu _g}^2}. \end{aligned}$$
(22)
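The local-QEANOVA estimators can be sketched with the settings of Sect. 5.3 (\(T^*=3\), \(dt=\omega=20\) years). The chain count, member count and variance values below are illustrative assumptions; the final assertion checks the equal-member equivalence of Eqs. (18) and (21).

```python
import numpy as np

# Sketch of the local-QEANOVA estimators; G, M, sigma_nu and the "true"
# changes alpha_true are assumptions, not values from the paper.
rng = np.random.default_rng(2)
G, M, Tstar = 6, 3, 3
omega = 20.0
t_c, t_e = 2000.0, 2080.0
tc_grid = np.linspace(t_c - omega, t_c + omega, Tstar)   # t_1, ..., t_{T*}
te_grid = np.linspace(t_e - omega, t_e + omega, Tstar)   # t_{T*+1}, ..., t_{2T*}
T = 2 * Tstar

sigma_nu = 0.5
alpha_true = rng.normal(0.0, 1.0, size=G)
alpha_true -= alpha_true.mean()

def local_fit(Y, grid, center):
    """Least-squares fit of a local linear trend Lam1 + Lam2 (t - center)."""
    R = np.column_stack([np.ones(len(grid)), grid - center])
    coef = np.linalg.lstsq(R, Y.mean(axis=0), rcond=None)[0]
    return coef, R

alpha_hat = np.empty(G)
sigma2_hat = np.empty(G)
for g in range(G):
    # Members around t_c, then around t_e (here with flat local trends)
    Yc = 10.0 + rng.normal(0.0, sigma_nu, size=(M, Tstar))
    Ye = 10.0 + alpha_true[g] + rng.normal(0.0, sigma_nu, size=(M, Tstar))
    c_coef, Rc = local_fit(Yc, tc_grid, t_c)
    c_e, Re = local_fit(Ye, te_grid, t_e)
    alpha_hat[g] = c_e[0] - c_coef[0]             # alpha(g, t_e) = Lam1e - Lam1c
    rss = ((Yc - Rc @ c_coef) ** 2).sum() + ((Ye - Re @ c_e) ** 2).sum()
    sigma2_hat[g] = rss / (T * M - 4)             # Eq. (19)

s2_alpha_naive = (alpha_hat ** 2).sum() / (G - 1)
s2_alpha_hat = s2_alpha_naive - 4.0 / T * (sigma2_hat / M).mean()   # Eq. (18)
sigma2_eta_hat = 2.0 * sigma2_hat.mean()                            # Eq. (22)
# Equal-M: Eq. (18) coincides with Eq. (21)
assert np.isclose(s2_alpha_hat, s2_alpha_naive - sigma2_eta_hat / (M * Tstar))
```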
Appendix 3: Simulation of MMEs
Each MME is simulated for Y(g, m, t) assuming that, for all \(t_s \le t \le t_f\):
$$\begin{aligned} Y(g,m,t)=\lambda (g,t) + \nu (g,m,t), \end{aligned}$$
(23)
where the climate response function \(\lambda (g,t)\) is a linear function of time and where the \(\nu (g,m,t)\) are independent and homoscedastic random variables (with variance \(\sigma _{\nu _g}^2\)). For convenience we further assume that \(\lambda (g,t)\) can be decomposed as \(\lambda (g,t)=w(t)+d(g,t)\), for \(t=t_1,\ldots ,t_T\), where the mean climate response w(t) of the G chains and the deviations d(g, t) of chain g are linear functions of time, expressed as: \(w(t)=B+P.(t-t_1)/(t_e-t_C)\) and \(d(g,t)=D(g).(t-t_1)/(t_e-t_C)\) with the constraint \(\sum _{g=1}^{ G }D(g)=0\).
For graphical simplification purposes, each MME for Y is constructed so that the parameters \((\sigma ^2_\nu , P, D(g), g=1,\ldots ,G )\) lead, for the change variable X at time \(t=t_e\), to a prescribed value of the Response-to-Uncertainty ratio [R2U(\(t_e\))] and to a prescribed value of the fractional variance \(F_\eta (t_e)\) due to internal variability.
For X, we have by definition \(\varphi (g,t)=\lambda (g,t)-\lambda (g,t_C)\) and \(\eta (g,m,t) = \nu (g,m,t)-\nu (g,m,t_C)\). We have thus for the change variable \(\mu (t)\) = \(w(t)-w(t_C)\), \(\alpha (g,t) = d(g,t)-d(g,t_C)\) and in turn \(\mu (t) = P.(t-t_C)/(t_e-t_C)\) and \(\alpha (g,t) = D(g).(t-t_C)/(t_e-t_C)\).
The theoretical values for \(\mu (t_e)\) and \(s^2_\alpha (t_e)\) are thus \(\mu (t_e) = P\) and \(s^2_\alpha (t_e)={\mathbb V\text {ar}}(D(g))\). Fixing \(P=1\), we then simply require in turn
$$\begin{aligned}&\sigma ^2_X(t_e) = \frac{1}{[R2U(t_e)]^2}; \end{aligned}$$
(24)
$$\begin{aligned}&\sigma ^2_\nu (t_e) = \frac{1}{2} \sigma ^2_\eta (t_e) = \frac{1}{2} F_\eta (t_e).\sigma ^2_X(t_e); \end{aligned}$$
(25)
$$\begin{aligned}&{\mathbb V\text {ar}}(D(g))=s^2_\alpha (t_e)=(1-F_\eta (t_e)).\sigma ^2_X(t_e). \end{aligned}$$
(26)
For each MME simulation, the deviations of the different chains, \(D(g), g=1,\ldots ,G\), are obtained from a sample of G realizations in a normal distribution. These realizations are scaled so that their mean is zero and their variance corresponds to the prescribed value \(s^2_\alpha (t_e)\).
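The simulation recipe above can be sketched as follows. The chain and member counts, time grid and prescribed \(R2U(t_e)\) and \(F_\eta(t_e)\) values are illustrative assumptions, not settings used in the paper.

```python
import numpy as np

# Sketch of the MME simulation of Eqs. (23)-(26): build Y(g, m, t) so that
# the change variable X at t_e has prescribed R2U(t_e) and F_eta(t_e).
rng = np.random.default_rng(3)
G, M, T = 8, 4, 30
t = np.linspace(1980.0, 2100.0, T)
t_C, t_e = 2000.0, 2090.0
B, P = 10.0, 1.0                       # P fixed to 1 (see text)
R2U, F_eta = 2.0, 0.4                  # prescribed values (assumptions)

sigma2_X = 1.0 / R2U ** 2              # Eq. (24)
sigma2_nu = 0.5 * F_eta * sigma2_X     # Eq. (25)
s2_alpha = (1.0 - F_eta) * sigma2_X    # Eq. (26)

# Deviations D(g): normal draws rescaled to zero mean and variance s2_alpha
D = rng.normal(size=G)
D = (D - D.mean()) / D.std(ddof=1) * np.sqrt(s2_alpha)

# Climate responses lambda(g, t) = w(t) + d(g, t), then raw projections Y
u = (t - t[0]) / (t_e - t_C)           # common time factor (t - t_1)/(t_e - t_C)
w = B + P * u                          # mean response w(t)
lam = w + D[:, None] * u               # shape (G, T)
Y = lam[:, None, :] + rng.normal(0.0, np.sqrt(sigma2_nu), size=(G, M, T))
```

Rescaling the draws, rather than drawing directly from a normal with variance \(s^2_\alpha(t_e)\), enforces the prescribed sample variance and the constraint \(\sum_g D(g)=0\) exactly in every simulated MME.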