Climate Dynamics, Volume 43, Issue 7–8, pp 1791–1810

ENSO, the IOD and the intraseasonal prediction of heat extremes across Australia using POAMA-2



The simulation and prediction of extreme heat over Australia on intraseasonal timescales in association with the El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD) are assessed using the Bureau of Meteorology’s Predictive Ocean Atmosphere Model for Australia (POAMA). The analysis is based on hindcasts over 1981–2010 and focuses on weeks 2 and 3 of the forecasts, i.e. beyond a typical weather forecast. POAMA simulates the observed increased probabilities of extreme heat during El Niño events, focussed over south-eastern and southern Australia in SON and over northern Australia in DJF, and the decreased probabilities of extreme heat during La Niña events, although the magnitude of these relationships is smaller than observed. POAMA also captures the signal of increased probabilities of extreme heat during positive phases of the IOD across southern Australia in SON and over Western Australia in JJA, but again underestimates the strength of the relationship. Shortcomings in the simulation of extreme heat in association with ENSO and the IOD over southern Australia may be linked to deficiencies in the teleconnection with Indian Ocean SSTs. Forecast skill for intraseasonal episodes of extreme heat is assessed using the Symmetric Extremal Dependence Index. Skill is highest over northern Australia in MAM and JJA and over south-eastern and eastern Australia in JJA and SON, whereas skill is generally poor over south-west Western Australia. Results show there are windows of forecast opportunity related to the state of ENSO and the IOD, where the skill in predicting extreme temperatures over certain regions is increased.


Keywords: Intraseasonal forecasts · Predictability · El Niño–Southern Oscillation · Indian Ocean Dipole · Extreme events · Heat waves

1 Introduction

Extreme temperature events such as heat waves, although not unusual phenomena for the Australian continent, are responsible for more deaths in Australia than any other natural hazard, including bushfires, storms and floods (Price Waterhouse Coopers 2011). A number of Australian regions have experienced significant numbers of heat-related deaths since the turn of the century. The Victorian bushfires in 2009, for example, were preceded by a severe heat wave; the Victorian Department of Health attributed 374 excess deaths to the event—78 more than for the corresponding period the previous summer (State of Victoria 2009). Similarly, 58 deaths were heat-induced during a severe heat event in Adelaide in 2009 (Mason et al. 2010). Other effects of heat extremes are, however, less quantifiable. For example, total annual losses to the agricultural community in a season with extreme temperature events may not be directly caused by the heat wave itself, but by associated factors such as water availability.

Studies of observed temperature extremes find that significant changes have been observed both globally (e.g. Tebaldi et al. 2006; Alexander et al. 2006; Kharin et al. 2007) and for the Australian region (e.g. Cai et al. 2007; Chambers and Griffiths 2008; Alexander and Arblaster 2009; Trewin and Vermont 2010) over the past century, and that Australia is likely to see a shift towards more frequent and more severe temperature extremes during the twenty-first century based on general circulation model (GCM) simulations (e.g. Alexander and Arblaster 2009; Seneviratne et al. 2012; White et al. 2013). This observed increase in unusually severe heat waves across many regions of Australia has led to a demand for longer-range forecasts of these types of events. The capability to predict the onset, maintenance and decay of extreme heat events beyond the typical 1-week forecast is a major challenge; however, recent advances have been made in this area. Most notably, there is a limited but growing body of research and development aimed at filling the prediction gap between the typical 1-week weather forecasts and seasonal outlooks (i.e. the ‘sub-seasonal’ or ‘intraseasonal’ timescale). The World Meteorological Organisation (WMO) has recently recognised this potential for intraseasonal prediction through the implementation of a new sub-seasonal prediction project. A key component of this project is to advance our understanding and prediction of extreme events, including heat waves. Prediction studies of heat extremes on the intraseasonal timescale have thus far been largely limited to case studies, such as the 2003 heat wave over Europe (Vitart 2005), the Russian heat wave in 2010 (Matsueda 2011) and the heat waves over Australia in 2009 (Hudson et al. 2011a).

Forecasts of extreme events on the intraseasonal timescale are potentially beneficial for a range of sectors of society, such as emergency management, energy (e.g. Roulston et al. 2003; Taylor and Buizza 2003), water resources management (e.g. Sankarasubramanian et al. 2009) and the financial markets and insurance (e.g. Zeng 2000; Jewson and Caballero 2003). For example, prediction of heat extremes during the summer months may assist in timely bushfire and excess heat warnings for high-risk communities, while prediction of winter extremes would be valuable to farmers making decisions related to the scheduling and management of irrigation, planting, harvesting and maintenance throughout the growing season. The agricultural community has, in particular, driven demand for intraseasonal forecasts of extreme heat events in Australia (CliMag 2009). Intraseasonal forecasts of extremes would add to existing climate information available to farmers and other sectors, assisting in better planning and preparedness for extreme events.

The Australian Bureau of Meteorology is currently engaged in developing the science and systems that could form an operational intraseasonal forecast service for Australia (Rashid et al. 2010; Hudson et al. 2011b; Marshall et al. 2011a, b; Hudson et al. 2013). This capability is based on the Predictive Ocean Atmosphere Model for Australia coupled model seasonal prediction system, POAMA. Hudson et al. (2011b) found that the forecast skill of rainfall on intraseasonal timescales increased during strong phases of the El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD), indicating that these slow variations of boundary forcing should be considered a source of intraseasonal climate predictability. Therefore, in the current study, we examine the influence of ENSO and the IOD on extreme heat events over Australia. A related paper focuses on heat extremes in association with drivers that operate on intraseasonal timescales, namely the Madden Julian Oscillation (MJO), the Southern Annular Mode (SAM) and atmospheric blocking (Marshall et al. 2013).

Several studies have examined the relationship of ENSO or the IOD to surface temperatures across the Australian region (e.g. Jones and Trewin 2000; Nicholls et al. 2010; Arblaster and Alexander 2012; Min et al. 2013). While ENSO is regarded as the primary driver of predictable interannual variations of rainfall across Australia (e.g. Risbey et al. 2009), the existence of either El Niño or La Niña conditions is also known to have a significant impact on both the frequency and pattern of heat extremes (Nairn et al. 2009; Min et al. 2013), which may lead to increased predictability of heat events across Australia during the different phases of ENSO. For example, increased cloudiness and rainfall associated with La Niña conditions typically reduces daytime temperatures but keeps night time temperatures higher. In contrast, El Niño events are associated with reduced cloudiness, which increases the likelihood of higher daytime temperatures. Alexander and Arblaster (2009) analysed trends in temperature extremes and noted, in terms of the driving mechanisms, that the strong influence of ENSO on the variability of the Australian climate allows for increased predictability on a seasonal timescale. Trewin (2009) also highlights that teleconnections between the frequency of extended heat waves and ENSO were sufficiently strong to indicate the prospect of a useful predictability index for the risk of heat waves on (at least) the seasonal timescale.

In this study, we assess the skill of the POAMA forecast system (version 2) at making predictions of heat extremes across Australia on the intraseasonal timescale. We assess weeks 2 and 3 of the forecast, i.e. beyond the period of a typical weather forecast. We define extreme heat events as occurring when the weekly or fortnightly averaged maximum temperature (Tmax) anomaly exceeds the 90th percentile (i.e. the upper decile), where the threshold is calculated from corresponding weekly or fortnightly averaged anomaly data. We focus on understanding the ability of POAMA to represent the teleconnections between ENSO, the IOD and heat extremes across Australia, and examine the predictability of heat extremes during the different phases of these drivers.
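The event definition above can be sketched in code. The following is a minimal illustration with synthetic data; in the study the threshold additionally depends on season, forecast start date, lead time and grid box, and the variable names here are ours:

```python
# Illustrative sketch of the extreme heat event definition: a week counts
# as an extreme heat event when its weekly-averaged Tmax anomaly exceeds
# the 90th percentile (upper decile) of the corresponding weekly-averaged
# anomaly data. Synthetic anomalies stand in for AWAP/POAMA data.
import numpy as np

rng = np.random.default_rng(0)
weekly_tmax_anom = rng.normal(0.0, 2.0, size=360)  # e.g. 30 years x 12 weeks

threshold = np.percentile(weekly_tmax_anom, 90)    # upper-decile threshold
is_extreme = weekly_tmax_anom > threshold          # boolean event series

event_rate = is_extreme.mean()                     # by construction ~0.10
```

By construction roughly 10 % of the weeks are flagged, which is why exceedance probabilities later in the paper are expressed relative to a 10 % base rate.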

This paper is structured as follows. Section 2 describes the methods and data used in this study and details the POAMA forecast system. Section 3 describes the relationship between extreme heat over Australia and ENSO and the IOD respectively in both observations and POAMA. Section 4 documents POAMA’s skill in predicting heat extremes on the intraseasonal timescale across Australia, including identifying possible windows of forecast opportunity associated with the phases of ENSO and the IOD. Summary and concluding remarks are presented in Sect. 5.

2 Methods

2.1 The POAMA forecast system

We assess forecasts from the most recent version of POAMA (version 2). Refer to Hudson et al. (2013) for full details of the model, data assimilation and ensemble generation (in this study, we are using the system referred to in their paper as P2-M). In brief, POAMA is a fully coupled ocean–atmosphere model and data assimilation system used for intraseasonal to seasonal prediction at the Bureau of Meteorology. The atmospheric model component of POAMA has a T47 horizontal resolution with 17 vertical levels. This horizontal resolution, together with the grid configuration, means that Tasmania is not resolved as land; therefore our analysis is restricted to the mainland Australian continent. The land surface component is a simple bucket model for soil moisture (Manabe and Holloway 1975) and has three soil levels for temperature (Hudson et al. 2011c). The ocean model is the Australian Community Ocean Model version 2 (ACOM2) (Schiller et al. 1997, 2002), and is based on the Geophysical Fluid Dynamics Laboratory (GFDL) Modular Ocean Model (MOM version 2). The ocean grid resolution is 2° in the zonal direction and 0.5° in the meridional direction at the Equator, which gradually increases to 1.5° near the poles. The atmosphere and ocean models are coupled using the Ocean Atmosphere Sea Ice Soil (OASIS) coupling software (Valcke et al. 2000).

Forecasts are initialised from observed atmospheric and oceanic states. POAMA obtains ocean initial conditions from the POAMA Ensemble Ocean Data Assimilation System (PEODAS; Yin et al. 2011a) and atmosphere and land initial conditions from the atmosphere–land initialisation scheme (ALI; Hudson et al. 2011c). To address model uncertainty, POAMA has adopted a pseudo multi-model ensemble strategy using three different configurations of the atmospheric model. A 33-member ensemble, generated in ‘burst’ mode (i.e. all initial conditions are valid for the same date and time, with no lagged initial conditions), is run for each forecast case. The 33-member ensemble comprises an 11-member ensemble for each of the three model versions. Perturbations to the atmosphere and ocean initial conditions are produced by a coupled-model breeding scheme (Hudson et al. 2013; Yin et al. 2011b).

Hindcasts (retrospective forecasts) are generated three times per month for the period 1981–2010. Forecast skill is assessed using anomalies from the hindcast climatology. These anomalies are created by producing a lead-time dependent ensemble mean climatology from the hindcasts. This climatology is a function of both start date and lead time, and thus a first-order linear correction for model mean bias is made (e.g. Stockdale 1997).
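The lead-time dependent bias correction described here can be sketched as follows. The array shapes and names are illustrative assumptions, not the actual POAMA data layout:

```python
# Sketch of a first-order linear mean-bias correction (cf. Stockdale 1997):
# anomalies are formed by subtracting an ensemble-mean hindcast climatology
# that is a function of both start date and lead time.
import numpy as np

rng = np.random.default_rng(1)
# hindcasts[year, start_date, lead, member] -- shapes are illustrative
hindcasts = rng.normal(25.0, 3.0, size=(30, 36, 6, 33))

# Climatology per (start date, lead time): average over years and members
clim = hindcasts.mean(axis=(0, 3))                 # shape (36, 6)

# Subtracting the climatology removes the lead-time dependent mean bias
anomalies = hindcasts - clim[None, :, :, None]

max_abs_mean = np.abs(anomalies.mean(axis=(0, 3))).max()
```

Because the climatology is subtracted per start date and lead time, the resulting anomalies have (near-)zero mean for every start/lead combination.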

2.2 Verification methodology for heat extremes forecasts

Verification of extreme events poses many challenges associated with their rarity, small sample sizes and large uncertainties, as has been discussed and reviewed by Casati et al. (2008) and Ferro and Stephenson (2012). Many standard verification scores (e.g. proportion correct, Critical Success Index, Heidke Skill Score, Peirce Skill Score, Equitable Threat Score, Relative Operating Characteristic Skill Score, Brier Score and others) are degenerate for rare events (i.e. as rarity increases, the scores tend towards a meaningless limit, usually zero, irrespective of whether the forecast system is skilful or not) (Hogan and Mason 2012; Ferro and Stephenson 2012). This degeneracy means that the scores are not good for comparing the skill of a forecast system for different event thresholds (e.g. the skill of above tercile forecasts with the skill of above decile forecasts). Hogan and Mason (2012) present a table which summarises the attributes of different performance measures in terms of their desirable properties, such as equitability, difficulty to hedge, base-rate independence and non-degeneracy for rare events, amongst others. The one score that combined more desirable properties than any other they examined, including non-degeneracy for rare events, was the Symmetric Extremal Dependence Index (SEDI). The SEDI is one of a number of scores that have recently been proposed as being appropriate for assessing the skill of deterministic forecasts of rare binary events (Ferro and Stephenson 2011). We have applied the SEDI score to our forecasts.

The SEDI score, proposed by Ferro and Stephenson (2011), is based on a 2 × 2 contingency table and is computed from the hit rate (H) and the false alarm rate (F) at each grid location, using the equation:
$$SEDI = \frac{\log F - \log H - \log (1 - F) + \log (1 - H)}{\log F + \log H + \log (1 - F) + \log (1 - H)}$$

A forecast is deemed to be a “hit” if it and the corresponding observation both exceed a particular threshold (e.g. the 90th percentile) and a “false alarm” if the forecast exceeds the threshold but the observation does not. SEDI scores greater (less) than zero indicate skill better (worse) than for random forecasts. For full details of the SEDI score, refer to Ferro and Stephenson (2011).
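The SEDI can be computed directly from the four contingency-table counts. A sketch, with a helper function of our own following the equation above:

```python
# SEDI from a 2x2 contingency table (Ferro and Stephenson 2011).
# H = hits / (hits + misses); F = false_alarms / (false_alarms + correct_negs).
# The score is undefined when H or F is exactly 0 or 1 (log of zero).
import math

def sedi(hits, misses, false_alarms, correct_negs):
    H = hits / (hits + misses)                        # hit rate
    F = false_alarms / (false_alarms + correct_negs)  # false alarm rate
    num = (math.log(F) - math.log(H)
           - math.log(1.0 - F) + math.log(1.0 - H))
    den = (math.log(F) + math.log(H)
           + math.log(1.0 - F) + math.log(1.0 - H))
    return num / den
```

For a random forecast H = F and the score is zero; a forecast that discriminates events (H > F) scores positive, and one worse than random (H < F) scores negative.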

To enable regional performance to be assessed, SEDI scores are calculated in each grid box using all 33 ensemble members and all three forecast start dates (aggregated into months based on start dates of the 1st, 11th and 21st for each month) in the period 1981–2010. We make the assumption that the forecasts starting 10 days apart in each month represent independent “heat” events. This is reasonable given that we are verifying intraseasonal forecasts, typically consisting of weekly-averaged periods. There are three possible ways in which we could evaluate our ensemble of forecasts. We could construct the 2 × 2 contingency table (a) based on the ensemble mean forecast exceeding the threshold; (b) based on each ensemble member individually, averaging the SEDI scores from each member at the end; or (c) based on each individual ensemble member’s forecast exceeding the threshold (i.e. the ensemble members are pooled and each adds to the counts in the contingency table). Option (a), using the ensemble mean, is not desirable for assessing an extreme event as it is likely to underestimate the frequency of occurrence of the event. Option (b) is potentially interesting for gauging the uncertainty in the SEDI across ensemble members. However, when applied to our study, we experienced issues with small sample sizes resulting in undefined SEDI scores (one or more entries in the contingency table equalled zero) for a number of grid boxes. In addition, there may be issues associated with averaging non-linear scores such as the SEDI. We have therefore opted for option (c), pooling all the ensemble members.
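Option (c) can be sketched as follows: each member’s forecast is compared to the (model) threshold and all members add to the counts in a single contingency table. The data here are synthetic and the calculation is shown for one grid box only:

```python
# Sketch of option (c): pool all ensemble members into one 2x2 table.
# Synthetic, correlated member forecasts stand in for the POAMA ensemble.
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_members = 270, 33              # e.g. 30 years x 3 months x 3 starts
obs = rng.normal(size=n_cases)
fcst = 0.6 * obs[:, None] + 0.8 * rng.normal(size=(n_cases, n_members))

obs_event = obs > np.percentile(obs, 90)          # observed threshold
fcst_event = fcst > np.percentile(fcst, 90)       # model threshold, per member

obs_rep = np.repeat(obs_event[:, None], n_members, axis=1)
hits = int(np.sum(fcst_event & obs_rep))
misses = int(np.sum(~fcst_event & obs_rep))
false_alarms = int(np.sum(fcst_event & ~obs_rep))
correct_negs = int(np.sum(~fcst_event & ~obs_rep))

# Frequency bias B: 'yes' forecasts over 'yes' observations (Wilks 2006)
bias = (hits + false_alarms) / (hits + misses)
```

Because both thresholds are taken from the full (non-cross-validated) samples here, the frequency bias is one, matching the recalibration requirement discussed below for the SEDI.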

The confidence interval of the SEDI can be estimated using the formula for the standard error given in Ferro and Stephenson (2011). Given that all the ensemble members contribute to the contingency table, the total sample size is equal to the number of forecast events multiplied by 33. For the calculation of the confidence interval, we account approximately for the non-independence of our ensemble members (this is particularly an issue at the forecast lead times examined in this paper) by computing an effective sample size, \(N_{eff} \cong N\frac{1 - \rho }{1 + \rho }\), based on the correlation ρ of the ensemble members’ Tmax at each grid box and lead time. (Note: as forecast lead time increases, the ensemble members become more independent and Neff approaches N.) The correlation coefficient is calculated by correlating (as a function of hindcast start and lead time) each ensemble member with the ensemble mean of the remaining 32 ensemble members at each grid box. The correlation coefficients are then averaged over the 33 realisations. For example, when assessing the skill of upper decile forecasts in winter for the fortnight comprising weeks 2 and 3 of the forecast (e.g. see Fig. 8, third row), the sample size is 8,910 (i.e. 30 years by 3 months by 3 start-dates per month by 33 ensemble members). Calculation of an effective sample size in this example reduces the sample size by on average a factor of 4.5 over Australia. We also examine the skill for weeks 2 and 3 of the forecast separately (i.e. the skill of a hot week, rather than a hot fortnight). In this case, both weeks 2 and 3 contribute to the contingency table. We compute 95 % confidence intervals for the SEDI scores at each grid point, and on the figures shade positive SEDI scores (i.e. better than random forecasts) whose confidence intervals do not include zero.
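The effective-sample-size adjustment can be sketched as follows, using a synthetic ensemble whose members share a common signal (names and numbers are ours):

```python
# Sketch: estimate rho by correlating each member with the ensemble mean
# of the remaining members, average over members, then shrink the pooled
# sample size via N_eff ~= N * (1 - rho) / (1 + rho).
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_members = 270, 33
signal = rng.normal(size=n_cases)          # shared (predictable) component
members = signal[:, None] + rng.normal(size=(n_cases, n_members))

rhos = []
for m in range(n_members):
    rest_mean = np.delete(members, m, axis=1).mean(axis=1)
    rhos.append(np.corrcoef(members[:, m], rest_mean)[0, 1])
rho = float(np.mean(rhos))

N = n_cases * n_members
N_eff = N * (1 - rho) / (1 + rho)   # independent members: rho -> 0, N_eff -> N
```

With a strong shared signal, rho is large and the pooled sample size shrinks substantially, as in the factor-of-4.5 reduction quoted in the text.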

Since the SEDI is a recently proposed score, we first compare the skill for forecasts of heat above the upper tercile (i.e. the 66th percentile) obtained using the SEDI score with that from a commonly used metric, the Relative Operating Characteristic (ROC) Skill Score (ROCSS; Wilks 2006). However, we do not use the ROC score to examine the forecasts of more extreme events (e.g. upper decile) since it is known to be degenerate for rare events (Hogan and Mason 2012). The ROCSS measures the ability of the forecasting system to discriminate between events and non-events, thereby providing information on forecast resolution. The ROCSS is calculated from the ROC area Az (\(ROCSS = 2A_{z} - 1\)), and ranges from −1 to 1, where scores greater than zero indicate skill better than for random forecasts (Wilks 2006). We calculate Az using 10 equally-sized forecast probability bins, and statistical significance is determined using the Mann–Whitney U-statistic (Mason and Graham 2002; Wilks 2006). For both the SEDI and ROC calculations, the tercile or decile thresholds are calculated using anomaly data from all the ensemble members, and the threshold is dependent on the forecast start-date, lead time and grid box. To determine the occurrence or non-occurrence of an event, the forecasts are compared to the model’s threshold and the observed data are compared to the observed threshold. Calculation of the thresholds for the ROC from both the model and observations is subject to leave-one-out cross validation. However, for the SEDI calculation, the thresholds are not cross-validated. This is because the SEDI score should be calculated on recalibrated forecasts that do not have a frequency forecasting bias (the frequency bias should equal one), such that the number of observed events equals the number of forecast events (Ferro and Stephenson 2011).
Calculating the thresholds in a cross-validated manner can mean that there may be a frequency bias, as will be shown in Sect. 4. As such, in order to provide a complete summary of forecast performance, the frequency bias should be reported alongside the SEDI score (Ferro and Stephenson 2011). The frequency bias B, also called the bias ratio, is the ratio of the number of ‘yes’ forecasts to the number of ‘yes’ observations, such that unbiased forecasts exhibit B = 1, indicating that the event was forecast the same number of times as it was observed (Wilks 2006).

We compare the model’s forecast of Tmax to observations using the Australian Water Availability Project (AWAP) observed gridded dataset (Jones et al. 2009), regridded to POAMA’s T47 grid.

3 Relationship between the large-scale drivers and heat extremes over Australia

3.1 Correlation with mean maximum temperature

Much of the predictable component of longer-range climate variability for Australia can be attributed to teleconnections driven by anomalous convection that is forced by SST anomalies (e.g. Stockdale et al. 1998). ENSO is a coupled ocean–atmosphere interaction in the Pacific Ocean, and is also a major source of remote forcing of SST variability in the Indian Ocean (Klein et al. 1999; Alexander et al. 2002); the predictability of SSTs in the tropical Indian Ocean is therefore linked to (but also limited by) the ability to predict ENSO (Zhao and Hendon 2009). POAMA can skilfully predict ENSO out to two or three seasons (Wang et al. 2011). However, ENSO is not the only important source of low-frequency climate variability that may be potentially predictable (Zhao and Hendon 2009), with the Indian Ocean having its own mode of coupled ocean–atmosphere variability—the Indian Ocean Dipole (IOD; e.g. Saji et al. 1999; Webster et al. 1999). The IOD is much less predictable (practically and theoretically) than ENSO (e.g. Luo et al. 2008; Wajsowicz 2007; Zhao and Hendon 2009). POAMA can skilfully predict the peak (spring) phase of the IOD with about 4 months lead time (Zhao and Hendon 2009). Prior to examining the teleconnection between these drivers and extreme heat, we examine their relationship with weekly-averaged mean Tmax in order to highlight regions where ENSO and the IOD have the most significant representation in observed mean maximum temperatures.

Both the model and AWAP observations are correlated with an observed ENSO index. The index used is the monthly (3-month running mean imposed) ENSO index from the US National Weather Service Climate Prediction Center (CPC) for 1981–2010. For the IOD, a monthly mean index is calculated using the PEODAS ocean reanalysis (Yin et al. 2011a; Xue et al. 2012) for the same period. The four consecutive weekly averages of AWAP Tmax in a given month are paired with the ENSO (or IOD) index for that given month (i.e. the index is repeated four times). Correlations between POAMA’s weekly-averaged Tmax and the monthly ENSO (or IOD) index are determined using all ensemble members and are computed for weeks 1 through to 4 of the forecast. In most regions, the teleconnections are fairly stable with lead time (not shown), suggesting that there is not much drift in the relationship in the first month of the forecast. The figures show the average correlation for weeks 2 and 3 of the forecast. Correlation coefficients are first transformed using the Fisher z-score transformation (Fisher 1915) and then averaged. Statistical significance is based on an estimate of the effective sample size, Neff, thus:
$$N_{eff} = N\left( {\frac{{1 - r_{1} r_{2} }}{{1 + r_{1} r_{2} }}} \right)$$
where r1 and r2 are the lag-1 autocorrelations of the timeseries being correlated (Bretherton et al. 1999). The lag-1 autocorrelation of consecutive (not running mean) weekly-averaged Tmax anomalies is close to zero over Australia in all seasons (or, put another way, the e-folding timescale is less than 1 week; not shown). When computed for all months for the 1981–2010 period, the lag-1 autocorrelation averaged over Australia of consecutive weekly-mean Tmax anomalies is 0.07. We use this as the estimate for r1. Similarly, the lag-1 autocorrelation of the CPC ENSO index for all months in the 1981–2010 period is 0.97 and this is our estimate of the ENSO weekly autocorrelation, r2. This means that our sample size is scaled by a factor of 0.87. For the AWAP observations the sample size is N = 360 (4 weeks by 3 months by 30 years) and Neff = 313. For the model, the sample size is larger (since we use all ensemble members), but we have been conservative and used the same Neff as for the observations in the significance calculation.
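The averaging and effective-sample-size machinery above can be sketched as follows; the numbers follow the text, while the helper function and example correlation values are our own:

```python
# Fisher z-transform averaging of correlations, plus the effective sample
# size N_eff = N * (1 - r1*r2) / (1 + r1*r2) used for significance testing.
import numpy as np

def fisher_mean(correlations):
    """Average correlations in Fisher z space, then back-transform."""
    z = np.arctanh(np.asarray(correlations, dtype=float))
    return float(np.tanh(z.mean()))

# Values quoted in the text for the AWAP/ENSO case:
r1, r2 = 0.07, 0.97          # lag-1 autocorrelations (weekly Tmax, ENSO index)
N = 360                      # 4 weeks x 3 months x 30 years
N_eff = N * (1 - r1 * r2) / (1 + r1 * r2)   # ~0.87 * N, i.e. ~313

# Illustrative correlations for weeks 2 and 3 (hypothetical values)
avg_corr = fisher_mean([0.30, 0.35, 0.25])
```

Averaging in z space rather than averaging the raw coefficients avoids the bias introduced by the bounded, non-linear correlation scale.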
Figure 1 shows correlations between weekly-averaged Tmax and ENSO across Australia for each season. Correlations are well represented in winter (JJA) and spring (SON), with POAMA capturing the positive response to ENSO over eastern and south-eastern Australia in both seasons. POAMA also captures the positive correlations with the ENSO index over northern Australia in autumn (MAM). In summer (DJF), POAMA underestimates the strength of the teleconnection over much of western, northern and eastern Australia, with the exceptions of the Top End and Cape York. The observed pattern of correlations with ENSO is similar to that found by Jones and Trewin (2000) and Min et al. (2013) for the relationship between ENSO and seasonal mean maximum temperatures. Jones and Trewin (2000) found that the strongest correlations occur over eastern Australia year round and in the tropics in summer. They note that this is where there are the strongest correlations with rainfall, also reflected in Risbey et al. (2009), such that where maximum temperature correlates positively with ENSO, it correlates negatively with rainfall, suggesting that much of the variability in maximum temperature is related to variability in rainfall and cloud cover. The change in the sign of the correlation over northern Australia between summer/autumn and winter/spring (Fig. 1) is also reported by Jones and Trewin (2000). The positive correlation with the ENSO index in DJF and MAM over northern Australia (Fig. 1) is related to strong links with rainfall variability (Jones and Trewin 2000). In contrast, in JJA and SON, rainfall variations play an insignificant role in maximum temperature variability and the weakly negative correlations reflect variations in the trade winds and related temperature advection into the region (Jones and Trewin 2000). 
Overall, the magnitude of the correlations in our study, using weekly-averaged Tmax, is much smaller than that of the seasonal Tmax correlations shown in Jones and Trewin (2000) and Min et al. (2013), which analyse JJA and SON, or the seasonal rainfall correlations in Risbey et al. (2009). This is likely due to reduced noise when using seasonal timescale temperature or rainfall data. The small correlations with weekly mean Tmax (Fig. 1) indicate that ENSO only accounts for a small proportion of the variance of weekly mean Tmax. Other drivers, particularly those that operate on weekly or intraseasonal timescales, are likely to play a larger role.
Fig. 1

Correlation between the ENSO index and weekly-averaged Tmax anomalies from AWAP observations (top row) and from POAMA (bottom row) for 1981–2010. Correlations from POAMA are calculated from weeks 2 and 3 of the forecast. Correlations significantly different from zero are stippled (5 % significance level)

The relationship of weekly-averaged Tmax with the IOD is assessed for JJA and SON, as these are the seasons when the IOD is most active. As with ENSO, the relationship between weekly-averaged Tmax and the IOD is generally well represented by POAMA, in as much as there is a positive correlation with the IOD over much of the southern half of Australia in both seasons (Fig. 2). This positive correlation is probably related to variability in rainfall and cloud cover, since rainfall is negatively correlated with the IOD over the southern half of Australia at these times of year (Risbey et al. 2009); the relationship is particularly strong in southwest Western Australia (Samuel et al. 2006) and eastern Australia (Verdon and Franks 2005) during the winter months. Min et al. (2013) obtain similar patterns of influence of the IOD on seasonal mean maximum temperature. In SON, the model exhibits positive correlations over south-eastern Australia of similar strength to those observed. However, POAMA underestimates the correlation between the IOD and weekly-averaged Tmax in both seasons over Western Australia and overestimates the strength of the correlation over south-eastern Australia in the winter months.
Fig. 2

As for Fig. 1, but for the IOD in winter (JJA) and spring (SON) seasons only

3.2 Exceedance probabilities for upper decile maximum temperature

To examine how well POAMA simulates the relationship between the large-scale drivers and extreme heat, we construct composites of multiple events, stratified by the phase or strength of the driver of interest, and estimate the likelihood of an extreme temperature threshold being exceeded within each phase. In order for a skilful forecast of a particular driver to translate into a skilful forecast of regional temperature extremes, POAMA needs to be able to correctly represent this relationship. As with the correlations, the composites do not give us a rigorous assessment of the model’s forecasting ability, since we are not doing a one-to-one comparison of respective observed and forecast events; instead, they tell us how well the model can capture the teleconnection between the driver and Tmax extremes.

We define periods of excessive heat as weekly-averaged Tmax anomalies that exceed the upper decile threshold (Fig. 3). In general, POAMA exhibits greater extremes in anomalous temperatures than observed when comparing values of the 90th percentile from POAMA with AWAP observations (Fig. 3). This bias is particularly strong over the south-east in SON and DJF and over northern Australia in DJF. The spatial pattern is generally similar to that observed in JJA and MAM, but in DJF POAMA is too extreme over much of the northern half of the continent, where a dry bias in POAMA results in less moisture and cloud in this region in DJF, allowing for higher temperature extremes. In SON, the highest threshold values are located over eastern rather than central Australia. Accordingly, the composites for the model are calculated with respect to the model’s threshold value and for observations with respect to the observed threshold value. The same is done for the forecast verification later in this paper. This approach is common practice in both intraseasonal and seasonal prediction.
Fig. 3

Values of the upper decile (90th percentile) thresholds for weekly-averaged Tmax anomalies from AWAP observations (top row) and POAMA (bottom row) for 1981–2010 in each of the four seasons. Threshold values from POAMA are calculated from weeks 2 and 3 of the forecast

For ENSO, the AWAP observations and POAMA ensemble hindcasts are stratified into El Niño and La Niña months using the aforementioned ENSO index, with thresholds of ±0.5 °C used to define the El Niño/La Niña cases. For the IOD, the data are similarly stratified into strongly positive and strongly negative cases, calculated from PEODAS (Yin et al. 2011a), using thresholds of ±1 standard deviation (SD) of the IOD about the mean. For both ENSO and the IOD, a probability of exceedance is then calculated for each composite by counting the number of weeks at each grid location for which the weekly-averaged anomaly is greater than the 90th percentile, and then dividing by the total number of weeks in that composite. We display the probability as a ratio to the mean probability (i.e. 10 %), such that values greater (less) than one indicate an increased (reduced) chance of extreme heat. We assess whether the ratios are statistically significantly different from one using the z-score test for event probabilities (Spiegel 1961). For determining the sample size, we assume that the consecutive (not running mean) weekly-averaged data in the composites are independent; as mentioned previously, the lag-1 autocorrelation of weekly-averaged Tmax anomalies is close to zero over Australia in all seasons. As for Sect. 3.1, the stratification is based on monthly data, and for the AWAP observations the four consecutive weeks within a particular month contribute to the composite. For POAMA, we construct the composites using weeks 2 and 3 of the forecasts, although the composites were also assessed over weeks 1–4 individually and in most cases the teleconnections are fairly stable with lead time (not shown). For the POAMA composites, each of the ensemble members contributes, which significantly increases the sample size compared to observations. However, as described in Sect. 2.2 in relation to forecast skill, the ensemble members are not independent, and we account for this by computing the effective sample size based on the correlation of the ensemble members’ Tmax at each grid box (see Sect. 2.2).
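The exceedance-ratio calculation and z-score test described above can be sketched as follows. This is a minimal illustration rather than the authors' code: the function name is ours, and for the POAMA ensemble composites the raw count n would be replaced by the effective sample size described in Sect. 2.2.

```python
import numpy as np

def exceedance_ratio_and_z(anomalies, threshold, base_prob=0.10):
    """Exceedance probability of a composite relative to climatology.

    anomalies : 1-D array of weekly-averaged Tmax anomalies for one
                composite at one grid point (assumed serially independent)
    threshold : the 90th-percentile threshold at that grid point
    Returns the ratio of the composite exceedance probability to the
    climatological base rate, and a z-score testing that probability
    against base_prob.
    """
    n = anomalies.size
    p_hat = np.mean(anomalies > threshold)   # fraction of weeks above the decile
    ratio = p_hat / base_prob                # >1: increased chance of extreme heat
    # One-sample z-test for an event probability (cf. Spiegel 1961);
    # for ensemble composites, n would be the effective sample size.
    z = (p_hat - base_prob) / np.sqrt(base_prob * (1 - base_prob) / n)
    return ratio, z
```

A ratio of 2 together with a significant z-score would correspond to the "nearly doubled" probabilities reported for the El Niño composites.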

El Niño events are associated with a warming of the eastern tropical Pacific and a cooling of SSTs around northern Australia, and typically bring reduced rainfall and cloudiness and generally higher temperatures across southern and eastern Australia in JJA and SON, and northern regions in DJF. This relationship in the mean is also reflected in extreme maximum temperatures (Fig. 4a–c), and the spatial patterns of the exceedance probabilities for extreme Tmax (Fig. 4) closely resemble the patterns of correlations for mean Tmax (Fig. 1) for both observations and the model in all seasons. This agrees with the result obtained by Min et al. (2013) in their comparison of the role of ENSO in seasonal mean Tmax and rainfall versus extreme daily Tmax and rainfall over Australia. As they note, this suggests that the same teleconnection mechanisms operate for extremes as for the mean (Cai et al. 2011). Interestingly, however, the observed signal over south-western Australia in SON (Fig. 4b, and taking into account Fig. 5b) is relatively much stronger than that for mean Tmax (Fig. 1b).
Fig. 4

Composites of weekly-averaged Tmax exceedance probabilities during El Niño months for AWAP observations (top row) and weeks 2 and 3 of the POAMA forecasts (bottom row) for 1981–2010 in each of the four seasons. Probabilities refer to the chance of weekly-averaged Tmax anomalies exceeding the 90th percentile and are expressed as a ratio to the mean probability (~10 %). Values >1 indicate that the probability of exceeding the 90th percentile is increased; values <1 indicate that the probability of exceeding the 90th percentile is reduced. Significant probabilities are stippled (10 % significance level)

Fig. 5

As for Fig. 4, but for La Niña months

During El Niño events (Fig. 4), the observed probability of weekly-averaged temperatures exceeding the 90th percentile is nearly doubled (compared to the mean expected probability of the event) across parts of eastern Australia in JJA (although not statistically significant in this season), northern Australia in DJF and eastern and southern Australia in SON, with the latter being the most significant, particularly across the south-east (Fig. 4b). POAMA captures these signals, although they are generally weaker than observed. This is particularly the case over much of southern and south-western Australia in SON (Fig. 4f). Min et al. (2013) do not, however, show any signal over south-western Australia in SON in their analysis of the relationship between ENSO and extreme daily Tmax. POAMA’s underestimation of the signal over southern and south-eastern Australia in SON may be related to deficiencies in the teleconnection to the Indian Ocean. The impact of El Niño on rainfall over south-eastern Australia is thought to come primarily from Rossby wave trains emanating from the Indian Ocean (Cai et al. 2011). This was confirmed by Min et al. (2013), who examined the impact of ENSO on extreme temperature with the influence of the IOD excluded and found that much of the signal over southern Australia was removed.

To first order, the response of Tmax extremes during La Niña events (Fig. 5) is opposite to that during El Niño events. Arblaster and Alexander (2012) also found strongly opposite responses in extreme Tmax over Australia between strong El Niño and La Niña events. In contrast to El Niño, La Niña events cause an intensification of the Walker Circulation and produce increased convection, which generally brings increased rainfall and cooler temperatures to Australia. The observed probability of weekly-averaged Tmax exceeding the 90th percentile is reduced compared to the mean expected probability in all seasons across most of Australia (although very few regions show a statistically significant response; Fig. 5). POAMA captures this relationship between La Niña and extreme heat fairly well spatially in DJF (Fig. 5g), SON (Fig. 5f) and MAM (Fig. 5h), although there is a general underestimation of the strength of this relationship. In JJA there is an increased probability of extreme heat over far northern Australia under La Niña conditions (significant in the model, but not in the observations), which is a similar response to that found for seasonal extremes of Tmax by Min et al. (2013; their Fig. 4).

The IOD is understood to be linked to ENSO through an extension of the Walker Circulation, but it also develops independently (Cai et al. 2005; Fischer et al. 2005; Zhong et al. 2006). This independently generated low-frequency variability and, importantly for this study, its teleconnections to climate anomalies across the Australian region, amongst others (e.g. Ansell et al. 2000; Saji and Yamagata 2003; Meyers et al. 2007), mean that the IOD is a source of predictability beyond that of ENSO (Zhao and Hendon 2009). The positive phase of the IOD shows a strong relationship with weekly-averaged extreme temperatures (Fig. 6). The probability of exceeding the 90th percentile is increased across most of Western Australia in JJA (Fig. 6a) and across all of southern and central Australia in SON (Fig. 6b). As was shown for ENSO, this relationship is very similar to that found for weekly-averaged Tmax (Fig. 2). In both seasons, POAMA captures the signal of an increased probability of extreme heat, but significantly underestimates it. POAMA’s deficiency over southern Australia is less clear for the negative IOD case (Fig. 7). POAMA simulates the observed reduced probabilities of extreme heat across much of central and southern Australia in both seasons for the negative IOD, although the signal is slightly smaller than observed, particularly in SON (Fig. 7). The most northerly parts of the continent in JJA (Fig. 7a) are the only regions to exhibit an observed increase in the probability of exceeding the 90th percentile during the negative phase of the IOD, and POAMA captures this signal. Overall, POAMA does better in simulating the teleconnections with the negative phase of the IOD in both JJA (Fig. 7c) and SON (Fig. 7d) than with the positive phase (Fig. 6).
Fig. 6

As for Fig. 4, but for strongly positive phases of the IOD in winter (JJA) and spring (SON) seasons only

Fig. 7

As for Fig. 4, but for strongly negative phases of the IOD in winter (JJA) and spring (SON) seasons only

4 Forecast skill of extreme heat over Australia

4.1 Overall skill

Before assessing the skill in predicting heat extremes over Australia associated with ENSO or the IOD, we examine the skill for all years combined. As mentioned in Sect. 2.2, we verify the forecasts of extreme heat using the SEDI score. The SEDI score is also appropriate for verifying less extreme forecasts, since it is non-degenerate not only for rare events but also for overwhelmingly common events (Hogan and Mason 2012). As a basis for comparison, we compute the skill of less extreme forecasts using the SEDI score and compare it to the skill computed using a commonly used metric, the ROCSS. The top two rows of Fig. 8 show the skill of forecasting above the upper tercile of Tmax for a fortnight (2-week period) with a one-week lead time (i.e. weeks 2 and 3, or days 8–21 of the forecast) using the ROCSS and the SEDI score respectively. This comparison shows that both metrics highlight comparable regions of skill, or discrimination, in every season. The remainder of this paper uses the SEDI score and not the ROCSS, since the ROCSS is degenerate for rare events and is therefore sensitive to the event threshold (Stephenson et al. 2008; Hogan and Mason 2012).
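For reference, the SEDI and the frequency bias B quoted later in this section can both be computed from the 2 × 2 contingency table of forecast versus observed exceedances. The sketch below follows the definition in Ferro and Stephenson (2011); the function names are our own.

```python
import math

def sedi(hits, misses, false_alarms, correct_negs):
    """Symmetric Extremal Dependence Index (Ferro and Stephenson 2011).

    Computed from the hit rate H and false alarm rate F of a 2x2
    contingency table; ranges from -1 to 1, with 0 for no skill.
    """
    H = hits / (hits + misses)                        # hit rate
    F = false_alarms / (false_alarms + correct_negs)  # false alarm rate
    num = math.log(F) - math.log(H) - math.log(1 - F) + math.log(1 - H)
    den = math.log(F) + math.log(H) + math.log(1 - F) + math.log(1 - H)
    return num / den

def frequency_bias(hits, misses, false_alarms):
    """B = number of event forecasts / number of event observations.
    B < 1 indicates the event is forecast less often than observed."""
    return (hits + false_alarms) / (hits + misses)
```

Unlike the ROCSS, the SEDI does not collapse to a trivial value as the event becomes rarer, which is why it is retained for the upper-decile verification.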
Fig. 8

Skill scores for forecasts of Tmax starting in each of the four seasons (DJF, MAM, JJA, SON) in the hindcast period (1981–2010). The skill of forecasting above the upper tercile for the fortnight comprising weeks 2 and 3 of the forecast (i.e. a hot fortnight) is shown in the top row using the ROCSS and in the second row using the SEDI score. The third and fourth rows show the SEDI score for more extreme temperatures, predicting above the upper decile, for the fortnight comprising weeks 2 and 3 of the forecast (i.e. an extremely hot fortnight) and for weeks 2 and 3 individually (i.e. an extremely hot week) respectively. Scores significantly greater than zero are shaded (5 % significance level); contour interval is 0.1

The third row of Fig. 8 shows the SEDI skill of forecasting an extremely hot (upper decile) fortnight; skill significantly better than for random forecasts is indicated. In general, the skill of forecasting upper decile Tmax is focussed over northern, eastern and south-eastern Australia. The skill is generally lower for forecasting extremes (third row, Fig. 8), but areas that are skilfully predicted for extreme temperature are generally coincident with areas that are skilfully predicted for less extreme temperatures (second row, Fig. 8). This suggests that, in general, the same climate drivers and processes are acting to provide skill in the mean and in the extreme. One exception to this appears to be over Western Australia. The skill drops away relatively more in this region as one moves to more extreme temperatures compared to other regions. This may be related to model deficiencies over this region, which becomes more apparent with extreme temperatures. For example, deficiencies over Western Australia in the teleconnection between the positive IOD and extreme temperature in JJA have already been noted in the previous section. There is virtually no forecasting frequency bias (B = 0.97 in all seasons) for forecasts above the upper tercile, but the model exhibits an under-forecasting frequency bias for Tmax above the upper decile (B = 0.84 in all seasons, i.e. the event was forecast less often than it was observed).

The statistically significant skill of predicting a hot fortnight comprising weeks 2 and 3 of the forecast is greater than that of predicting a hot week (compare rows 3 and 4 of Fig. 8, where the latter shows the skill for weeks 2 and 3 individually), since an increase in the averaging period reduces noise. There is very little significant skill beyond the third week of the forecast, although there is some skill in fortnight 2 (comprising weeks 3 and 4; not shown). For weeks 2 and 3 of the forecast (as well as the fortnight comprising weeks 2 and 3), the model is most skilful overall in MAM and JJA, although there is locally high skill over south-eastern Australia in SON (Fig. 8). The highest skill over northern Australia occurs in MAM and JJA, with the highest skill over south-eastern and eastern Australia occurring during JJA and SON. In general, the skill is poor over south-west Western Australia. Again, this may be related in part to the aforementioned deficiencies in the model in capturing the teleconnection between the IOD and Tmax in this region.

4.2 Contribution to skill from the large-scale drivers

This section examines the contribution of ENSO and the IOD to the forecast skill shown in Fig. 8, and highlights the times, or windows of forecast opportunity, when we can expect skill higher than that shown in Fig. 8 (or conversely, informs of times when the system is less skilful). We focus on the skill of a hot week rather than fortnight, and assess the skill of both weeks 2 and 3 of the forecast. As in Sect. 3.2, forecasts are stratified according to the state of ENSO or the IOD based on monthly data (i.e. forecasts that start on the 1st, 11th and 21st of a given month will all fall within the same stratification sample). Determination of ENSO or IOD phase is done as in Sect. 3.2. We note that forecasts that start within the same month, although probably representing independent heat events, are not independent in terms of the phase of ENSO or the IOD. By including multiple forecasts from each month we are reducing the sampling noise that would be associated with sampling fewer heat events, but we are still restricted by the sampling noise of having a limited number of ENSO and IOD events within the 30-year period.

Figure 9 shows the skill of forecasting hot weeks during ENSO periods (top row) and during ENSO–neutral periods (second row). The clearest signal is found in winter (JJA). When forecasts are initialised during ENSO periods in JJA (Fig. 9a), the forecasts are more skilful over much of northern, eastern and western Australia compared to when forecasts are initialised in neutral periods (Fig. 9e). The increased skill over northern Australia seems to come mostly from La Niña periods (Fig. 9m) and over eastern and south-eastern Australia from El Niño periods (Fig. 9i). In these regions and at these times, there is a tendency for an increased chance of extreme heat, which POAMA faithfully represents (Figs. 4, 5).
Fig. 9

Skill (SEDI scores) for forecasts of weekly-averaged Tmax above the upper decile stratified by strong ENSO events (i.e. La Niña and El Niño events; top row), neutral ENSO events (second row), El Niño events (third row) and La Niña events (bottom row), for forecasts starting in each of the four seasons (DJF, MAM, JJA, SON) in 1981–2010. Panels show the skill for weeks 2 and 3 of the forecast (i.e. the skill of forecasting an extremely hot week). Scores significantly greater than zero are shaded (5 % significance level) and the contour interval is 0.1

In SON, there is a larger, more consistent area of significant skill over eastern Australia in general during ENSO periods compared to neutral periods (Fig. 9b, f), which seems to be more from La Niña than El Niño periods (Fig. 9j, n), but the signal is much weaker than in winter. During La Niña periods, there is a reduced probability of extreme heat over eastern Australia, which POAMA is able to capture (Fig. 5). The large observed signal of an increase in heat extremes over south-eastern Australia under El Niño periods in spring (Fig. 4b) translates into only very minor increases in forecast skill over the far south-east in El Niño periods (Fig. 9j) compared to neutral conditions (Fig. 9f). This is probably related to model deficiencies, since POAMA significantly underestimates the magnitude of the teleconnection (Fig. 4f). During both ENSO and neutral periods, there is high skill in the region of northern New South Wales and southern Queensland, suggesting that this skill is independent of ENSO. In a related paper examining intraseasonal drivers of heat extremes, we have found that negative SAM events contribute significantly to skill in that region during spring (Marshall et al. 2013).

In summer (DJF), there is no clear indication of increased skill of the forecasts during ENSO periods compared to neutral periods (Fig. 9c, g). The skill that is apparent over northern Australia appears to be independent of ENSO. Marshall et al. (2013) have shown that during negative SAM events there is increased skill for predicting heat extremes over this region in summer. In MAM, there is increased skill over southern Australia during ENSO periods, arising from skill during both El Niño and La Niña periods, but primarily the latter (Fig. 9). At these times, there is a tendency for a slightly reduced chance of extreme heat in observations and POAMA over most of southern Australia (Figs. 4, 5).

There are also regions in some of the seasons where the forecast skill is higher for neutral ENSO cases (second row, Fig. 9) compared to strong ENSO cases (top row, Fig. 9), most notably over parts of northern Australia in DJF and MAM and over south-central Australia in SON. The skill in these regions may be related to other large-scale drivers which play a role in the intraseasonal variability of Australia’s climate (Marshall et al. 2013). For instance, there is enhanced skill over northern Australia in DJF during negative SAM events compared to weak SAM events, and over northern Australia in MAM and southern Australia in SON during strong compared to weak MJO events (Marshall et al. 2013).

The skill of forecasts in association with the IOD is assessed for the JJA and SON seasons, when the IOD is active. Here we stratify forecast starts into those months when the IOD is strong or extreme (greater than the mean ± 1 SD) and those when it is weak or neutral (within the mean ± 0.5 SD). Given that the IOD is not independent of ENSO during SON (e.g. Saji et al. 2006; Meyers et al. 2007; Cai et al. 2011), we compute the skill with respect to the IOD both with and without the effect of ENSO for SON, following the method of Hudson et al. (2011b). To examine the impact of the IOD with the effect of ENSO “removed”, those cases that are associated with El Niño or La Niña events are excluded. As before, the observed monthly (3-month running mean imposed) ENSO index from the U.S. National Weather Service CPC is used, and warm and cold events are defined based on thresholds of ±0.5 °C. In this classification, months are analysed only if the corresponding ENSO index falls between −0.5 and +0.5 °C (i.e. ENSO-neutral conditions).
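The stratification just described can be sketched as follows. This is an illustrative helper rather than the authors' code; the function name and array layout are our assumptions.

```python
import numpy as np

def stratify_iod_months(iod_index, enso_index, remove_enso=False):
    """Boolean masks for strong-IOD and weak/neutral-IOD months.

    iod_index  : monthly IOD anomalies about the mean
    enso_index : monthly ENSO index (degC, 3-month running mean imposed)
    If remove_enso is True, only ENSO-neutral months (|ENSO| < 0.5 degC)
    are retained in either category, as done for SON.
    """
    sd = np.std(iod_index)
    strong = np.abs(iod_index) > 1.0 * sd    # strong or extreme IOD (beyond +/-1 SD)
    neutral = np.abs(iod_index) < 0.5 * sd   # weak or neutral IOD (within +/-0.5 SD)
    if remove_enso:
        enso_neutral = np.abs(enso_index) < 0.5   # exclude El Nino/La Nina months
        strong &= enso_neutral
        neutral &= enso_neutral
    return strong, neutral
```

The returned masks would then be used to select the forecast starts contributing to each skill map.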

In JJA, there is an indication of increased skill over small regions of northern and southern Australia in strong IOD events compared to neutral events (Fig. 10a, d). In SON, the effect of removing ENSO from the IOD stratification is to remove most of the significant skill over eastern Australia (Fig. 10b, c), where ENSO plays a major role. The skill when the IOD is strong (considering only ENSO-neutral months, i.e. with the effect of ENSO “removed”) appears to be enhanced over south-eastern and northern Australia compared to the skill when the IOD is neutral (Fig. 10c, f). POAMA does a reasonable job of capturing the relationship between the IOD and extreme temperature in these regions at this time of year, although the strength of the relationship is generally underestimated (Figs. 6, 7).
Fig. 10

As for Fig. 9, but stratified by strong (i.e. positive and negative IOD events; top row) and weak or neutral (bottom row) phases of the IOD, for forecasts starting in winter (JJA) and spring (SON) seasons only. The third column shows the results for SON when cases associated with El Niño or La Niña events are removed (see text for details)

5 Summary and conclusions

This study assesses the representation of the teleconnections between heat extremes and the large-scale climate drivers of ENSO and the IOD in the Australian Bureau of Meteorology’s intraseasonal–seasonal prediction model POAMA, and investigates the forecast skill of extreme weekly-averaged heat events during different phases of these drivers on the intraseasonal timescale. ENSO and the IOD are known to play a key role in modulating mean seasonal climate variability over Australia (e.g. Risbey et al. 2009), but less is known of their role in modulating extremes, and particularly extremes on the intraseasonal timescale. Recent work by Arblaster and Alexander (2012) and Min et al. (2013) has shown that both drivers have a significant influence on seasonal extremes over Australia. ENSO and the IOD are associated with slow variations in sea-surface temperatures (SSTs) and operate on timescales longer than intraseasonal. However, previous work looking at rainfall prediction found that these drivers have a clear impact on the intraseasonal timescale and should be considered as a source of predictability (Hudson et al. 2011b).

As has been found previously (Min et al. 2013), the ENSO and IOD teleconnection patterns found for extreme weekly-averaged Tmax resemble those found for mean Tmax. El Niño events are generally associated with an increased chance of heat extremes and La Niña events with a reduced chance of heat extremes. During El Niño events, the probability of having an extremely hot week is nearly doubled (compared to the mean expected probability of the event) over northern Australia in DJF and over southern Australia in SON. POAMA captures these signals, although much weaker than observed, particularly over southern Australia in SON. The ENSO link to southern Australia in spring is thought to come about via the Indian Ocean (Cai et al. 2011) and POAMA’s weakened response may be related to deficiencies associated with the teleconnection to the Indian Ocean. This may also explain POAMA’s underestimation of the signal of increased heat extremes over southern Australia in the case of positive IOD events in both JJA and SON. The teleconnection between extreme Tmax and the IOD is generally better represented for the negative than the positive IOD events. Negative IOD events are associated with reduced probabilities of extreme heat over much of Australia in winter and spring, apart from a small region of increased probabilities of heat over northern Australia.

In the second half of the paper, we examine the intraseasonal skill of POAMA’s extreme temperature forecasts using the SEDI, which is appropriate for assessing the skill of deterministic forecasts of rare events since it is non-degenerate for rare events (Ferro and Stephenson 2011). This initial evaluation of the intraseasonal skill of forecasting heat extremes is promising. The skill is in general lower for forecasting extremes compared to forecasting events in the upper tercile, but the areas that are skilful for extreme temperatures generally correspond to those for less extreme temperatures. For weeks 2 and 3 of the forecast, POAMA is most skilful in MAM and JJA, with localised high skill over south-eastern Australia in SON. The highest skill occurs over northern Australia in MAM and JJA, and during JJA and SON over south-eastern and eastern Australia. Skill is, however, generally poor over western and southern Australia. This may be related in part to the aforementioned possible deficiencies in the model in capturing the teleconnection between the Indian Ocean and southern Australian climate.

Importantly, we have shown in this paper that there are windows of forecast opportunity related to the state of ENSO and the IOD, where the skill in predicting extreme temperatures over certain regions is increased. This skill is partly related to how well the model can simulate the teleconnections between the drivers and extreme heat over Australia. If, for example, we can improve the teleconnection between the Indian Ocean and southern Australia in the model, then prediction skill may be improved. When stratified into the different phases of the large-scale climate drivers, the clearest signal is when forecasts are initialised during ENSO periods in JJA, showing more skill over much of northern, eastern and western Australia when compared to forecasts initialised in neutral periods. The increased skill over northern Australia comes mainly from La Niña periods and over eastern and south-eastern Australia from El Niño periods. In these regions and at these times, there is a tendency for an increased chance of extreme heat, which POAMA faithfully represents. There is some indication of increased skill over eastern Australia in spring during ENSO periods compared to neutral periods, although given the strong observed relationship between El Niño and extreme heat during spring, we would have expected a larger positive impact on forecast skill. Model deficiencies are probably contributing to this weak response in skill, since POAMA significantly underestimates the magnitude of the relationship between extreme heat and El Niño in this season. Under strong IOD conditions, there are indications of statistically significant increases in skill over small areas of northern and southern Australia in both JJA and SON compared to weak IOD conditions. For SON, the assessment was also done with the influence of ENSO removed (due to the correlation between ENSO and IOD in this season), in order to isolate the influence of the Indian Ocean.

We conclude that POAMA shows good promise in the area of intraseasonal forecasting of extreme heat events across Australia. Predictions of extreme events on the intraseasonal to seasonal timescale are in their infancy worldwide (e.g. Hamilton et al. 2012; Becker et al. 2013) and remain a significant challenge. We have shown that the skill in predicting intraseasonal periods of extreme heat can depend on the state of ENSO and the IOD and is closely related to the ability of the model to capture the teleconnection with these large-scale climate drivers. In identifying windows of forecast opportunity (i.e. periods with increased skill for extreme heat events) this paper represents a first step for producing enhanced and actionable forecasts—particularly relevant for extreme events which have high societal impact. Our study contributes to the growing area of research aiming to fill the current prediction capability gap between weather forecasts and seasonal outlooks.



This work was supported by the Managing Climate Variability Program of the Grains Research and Development Corporation (GRDC). The authors would like to thank our colleagues Andrew Marshall, Harry Hendon, Matthew Wheeler and Beth Ebert, as well as two anonymous reviewers, for their insightful comments and advice in the preparation of this manuscript.


  1. Alexander LV, Arblaster JM (2009) Assessing trends in observed and modelled climate extremes over Australia in relation to future projections. Int J Climatol 29:417–435CrossRefGoogle Scholar
  2. Alexander MA, Blade I, Newman M, Lazante JR, Lau NC, Scott JD (2002) The atmospheric bridge: the influence of ENSO teleconnections on air–sea interaction over the global oceans. J Clim 15:2205–2231CrossRefGoogle Scholar
  3. Alexander LV, Zhang X, Peterson TC, Caesar J, Gleason B, Klein Tank AMG, Haylock M, Collins D, Trewin B, Rahimzadeh F, Tagipour A, Ambenje P, Rupa Kumar K, Revadekar J, Griffiths G (2006) Global observed changes in daily climate extremes of temperature and precipitation. J Geophys Res Atmos 111:D05109. doi:10.1029/2005JD006290 CrossRefGoogle Scholar
  4. Ansell T, Reason CJC, Meyers G (2000) Variability in the tropical southeast Indian Ocean and links with southeast Australian winter rainfall. Geophys Res Lett 27:3977–3980CrossRefGoogle Scholar
  5. Arblaster JM, Alexander LV (2012) The impact of the El Niño–Southern Oscillation on maximum temperature extremes. Geophys Res Lett 39:L20702. doi:10.1029/2012GL053409 CrossRefGoogle Scholar
  6. Becker EJ, van den Dool H, Peńa M (2013) Short-term climate extremes: prediction skill and predictability. J Clim 26:512–531CrossRefGoogle Scholar
  7. Bretherton CS, Widmann M, Dymnikov V, Wallace J, Bladé I (1999) The effective number of spatial degrees of freedom of a time-varying field. J Clim 12:1990–2009CrossRefGoogle Scholar
  8. Cai W, Hendon HH, Meyers G (2005) Indian Ocean dipole like variability in the CSIRO Mark3 climate model. J Clim 18:1449–1468CrossRefGoogle Scholar
  9. Cai W, Jones DA, Harle K, Cowan T, Power S, Smith I, Arblaster J, Abbs D (2007) Chapter 2: past climate change, climate change in Australia. CSIRO technical report, CSIRO, AustraliaGoogle Scholar
  10. Cai W, van Rensch P, Cowan T, Hendon HH (2011) Teleconnection pathways of ENSO and the IOD and the mechanisms for impacts on Australian rainfall. J Clim 24:3910–3923CrossRefGoogle Scholar
  11. Casati B, Wilson LJ, Stephenson DB, Nurmi P, Ghelli A, Pocernich M, Damrath U, Ebert EE, Brown BG, Mason S (2008) Forecast verification: current status and future directions. Meteorol Appl 15:3–18CrossRefGoogle Scholar
  12. Chambers LE, Griffiths GM (2008) The changing nature of temperature extremes in Australia and New Zealand. Aust Meteorol Mag 57:13–35Google Scholar
  13. CliMag (2009) Multi-week forecasts will help bridge the gap. In: CliMag (Managing Climate Variability Newsletter) 18: December. Available from the Grains Research and Development Corporation, AustraliaGoogle Scholar
  14. Ferro CAT, Stephenson DB (2011) Extremal dependence indices: improved verification measures for deterministic forecasts of rare binary events. Weather Forecast 26:699–713CrossRefGoogle Scholar
  15. Ferro CAT, Stephenson DB (2012) Deterministic forecasts of extreme events and warnings. In: Jolliffe IT, Stephenson DB (eds) Forecast verification: a practitioner’s guide in atmospheric science, 2nd edn. Wiley, ChichesterGoogle Scholar
  16. Fischer AS, Terray P, Guilyardi E, Gualdi S, Delecluse P (2005) Two independent triggers for the Indian Ocean dipole/zonal mode in a coupled GCM. J Clim 18:3349–3428CrossRefGoogle Scholar
  17. Fisher RA (1915) Frequency distribution of the values of the correlation coefficient in samples of an indefinitely large population. Biometrika 10:507–521Google Scholar
  18. Hamilton E, Eade R, Graham RJ, Scaife AA, Smith DM, Maidens A, MacLachlan C (2012) Forecasting the number of extreme daily events on seasonal timescales. J Geophys Res Atmos 117:D03114. doi:10.1029/2011JD016541 CrossRefGoogle Scholar
  19. Hogan RJ, Mason IB (2012) Deterministic forecasts of binary events. In: Jolliffe IT, Stephenson DB (eds) Forecast verification: a practitioner’s guide in atmospheric science, 2nd edn. Wiley, ChichesterGoogle Scholar
  20. Hudson D, Marshall AG, Alves O (2011a) Intraseasonal forecasting of the 2009 summer and winter Australian heat waves using POAMA. Weather Forecast 26:257–279CrossRefGoogle Scholar
  21. Hudson D, Alves O, Hendon HH, Marshall AG (2011b) Bridging the gap between weather and seasonal forecasting: intraseasonal forecasting for Australia. Q J R Meteor Soc 137:673–689CrossRefGoogle Scholar
  22. Hudson D, Alves O, Hendon HH, Wang G (2011c) The impact of atmospheric initialisation on seasonal prediction of tropical Pacific SST. Clim Dyn 36:1155–1171CrossRefGoogle Scholar
  23. Hudson D, Marshall AG, Yin Y, Alves O, Hendon HH (2013) Improving intraseasonal prediction with a new ensemble generation strategy. Mon Weather Rev. doi:10.1175/MWR-D-13-00059.1 Google Scholar
  24. Jewson S, Caballero R (2003) The use of weather forecasts in the pricing of weather derivatives. Meteorol Appl 10:377–389CrossRefGoogle Scholar
  25. Jones DA, Trewin BC (2000) On the relationships between the El Niño–Southern Oscillation and Australian land surface temperature. Int J Climatol 20:697–719CrossRefGoogle Scholar
  26. Jones DA, Wang W, Fawcett R (2009) High-quality spatial climate data-sets for Australia. Aust Meteorol Oceanogr J 58:233–248Google Scholar
  27. Kharin VV, Zwiers FW, Zhang X, Hegerl GC (2007) Changes in temperature and precipitation extremes in the IPCC ensemble of global coupled model simulations. J Clim 20:1419–1444CrossRefGoogle Scholar
  28. Klein SA, Soden BJ, Lau NC (1999) Remote sea surface temperature variations during ENSO: evidence for a tropical atmospheric bridge. J Clim 12:917–932CrossRefGoogle Scholar
  29. Luo JJ, Mason S, Behera SK, Yamagata T (2008) Extended ENSO prediction using a fully coupled ocean–atmosphere model. J Clim 21:84–93CrossRefGoogle Scholar
  30. Manabe S, Holloway J (1975) The seasonal variation of the hydrological cycle as simulated by a global model of the atmosphere. J Geophys Res 80:1617–1649. doi:10.1029/JC080i012p01617 CrossRefGoogle Scholar
  31. Marshall AG, Hudson D, Wheeler MC, Hendon HH, Alves O (2011a) Assessing the simulation and prediction of rainfall associated with the MJO in the POAMA seasonal forecast system. Clim Dyn 37:2129–2141CrossRefGoogle Scholar
  32. Marshall AG, Hudson D, Wheeler MC, Hendon HH, Alves O (2011b) Simulation and prediction of the Southern Annular Mode and its influence on Australian intra-seasonal climate in POAMA. Clim Dyn 38:2483–2502CrossRefGoogle Scholar
  33. Marshall AG, Hudson D, Wheeler M, Alves O, Hendon HH, Pook MJ, Risbey JS (2013) Intra-seasonal drivers of extreme heat over Australia in observations and POAMA-2. Clim Dyn. doi:10.1007/s00382-013-2016-1
  34. Mason SJ, Graham NE (2002) Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: statistical significance and interpretation. Q J R Meteor Soc 128:2145–2166
  35. Mason K, Nairn J, Herbst J, Felgate P (2010) Heatwave—the Adelaide experience. In: Proceedings of the 20th international symposium on the forensic sciences (ANZFSS), 5–9 September, Sydney, Australia
  36. Matsueda M (2011) Predictability of Euro-Russian blocking in summer of 2010. Geophys Res Lett 38:L06801. doi:10.1029/2010GL046557
  37. Meyers G, McIntosh P, Pigot L, Pook M (2007) The years of El Niño, La Niña, and interactions with the tropical Indian Ocean. J Clim 20:2872–2880
  38. Min S-K, Cai W, Whetton P (2013) Influence of climate variability on seasonal extremes over Australia. J Geophys Res Atmos 118:643–654. doi:10.1002/jgrd.50164
  39. Nairn J, Fawcett R, Ray D (2009) Defining and predicting excessive heat events, a national system. In: Proceedings of the CAWCR modelling workshop: understanding high impact weather, 30 November–2 December 2009, Melbourne, Australia, pp 83–86
  40. Nicholls N, Uotila P, Alexander L (2010) Synoptic influences on seasonal, interannual and decadal temperature variations in Melbourne, Australia. Int J Climatol 30:1372–1381
  41. Price Waterhouse Coopers (2011) Protecting human health and safety during severe and extreme heat events: a national framework. Commonwealth Government Report, Australia
  42. Rashid HA, Hendon HH, Wheeler MC, Alves O (2010) Predictability of the Madden–Julian Oscillation in the POAMA dynamical seasonal prediction system. Clim Dyn 36:649–661
  43. Risbey JS, Pook MJ, McIntosh PC, Wheeler MC, Hendon HH (2009) On the remote drivers of rainfall variability in Australia. Mon Weather Rev 137:3233–3253
  44. Roulston MS, Kaplan DT, Hardenberg J, Smith LA (2003) Using medium-range weather forecasts to improve the value of wind energy production. Renew Energy 28:585–602
  45. Saji NH, Yamagata T (2003) Possible impacts of Indian Ocean dipole mode events on global climate. Clim Res 25:151–169
  46. Saji NH, Goswami BN, Vinayachandran PN, Yamagata T (1999) A dipole mode in the tropical Indian Ocean. Nature 401:360–363
  47. Saji NH, Xie S, Yamagata T (2006) Tropical Indian Ocean variability in the IPCC twentieth-century climate simulations. J Clim 19:4397–4417
  48. Samuel JM, Verdon DC, Sivapalan M, Franks SW (2006) Influence of Indian Ocean sea surface temperature variability on southwest Western Australian winter rainfall. Water Resour Res 42:W08402
  49. Sankarasubramanian A, Lall U, Devineni N, Espinueva S (2009) The role of monthly updated climate forecasts in improving intraseasonal water allocation. J Appl Meteorol Clim 48:1464–1482
  50. Schiller A, Godfrey J, McIntosh P, Meyers G (1997) A global ocean general circulation model for climate variability studies. CSIRO marine research report no. 227, CSIRO, Australia
  51. Schiller A, Godfrey J, McIntosh P, Meyers G, Smith N, Alves O, Wang O, Fiedler R (2002) A new version of the Australian community ocean model for seasonal climate prediction. CSIRO marine research report no. 240, CSIRO, Australia
  52. Seneviratne SI, Nicholls N, Easterling D, Goodess CM, Kanae S, Kossin J, Luo Y, Marengo J, McInnes K, Rahimi M, Reichstein M, Sorteberg A, Vera C, Zhang X (2012) Changes in climate extremes and their impacts on the natural physical environment. In: Field CB, Barros V, Stocker TF, Qin D, Dokken DJ, Ebi KL, Mastrandrea MD, Mach J, Plattner G-K, Allen SK, Tignor M, Midgley PM (eds) Managing the risks of extreme events and disasters to advance climate change adaptation. A special report of working groups I and II of the Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press, Cambridge and New York, pp 109–230
  53. Spiegel MR (1961) Schaum's outline of theory and problems of statistics. Schaum Publishing Company, New York
  54. State of Victoria (2009) January 2009 heatwave in Victoria: an assessment of health impacts. Victorian health technical report, Australia
  55. Stephenson DB, Casati B, Ferro CAT, Wilson CA (2008) The extreme dependency score: a non-vanishing measure for forecasts of rare events. Meteorol Appl 15:41–50
  56. Stockdale TN (1997) Coupled ocean–atmosphere forecasts in the presence of climate drift. Mon Weather Rev 125:809–818
  57. Stockdale TN, Anderson DLT, Alves JOS, Balmaseda MA (1998) Global seasonal rainfall forecasts using a coupled ocean–atmosphere model. Nature 392:370–373
  58. Taylor JW, Buizza R (2003) Using weather ensemble predictions in electricity demand forecasting. Int J Forecast 19:57–70
  59. Tebaldi C, Hayhoe K, Arblaster JM, Meehl GA (2006) Going to the extremes: an intercomparison of model-simulated historical and future changes in extreme events. Clim Change 79:185–211
  60. Trewin BC (2009) A new index for monitoring changes in heatwaves and extended cold spells. In: Proceedings of the 9th international conference on southern hemisphere meteorology and oceanography, 6–8 February 2009, Melbourne, Australia
  61. Trewin B, Vermont H (2010) Changes in the frequency of record temperatures in Australia, 1957–2009. Aust Meteorol Oceanogr J 60:113–119
  62. Valcke S, Terray L, Piacentini A (2000) Oasis 2.4, Ocean atmosphere sea ice soil: user's guide. TR/CMGC/00/10, CERFACS, Toulouse, France
  63. Verdon DC, Franks SW (2005) Indian Ocean sea surface temperature variability and winter rainfall: Eastern Australia. Water Resour Res 41:W09413
  64. Vitart F (2005) Monthly forecast and the summer 2003 heat wave over Europe: a case study. Atmos Sci Lett 6:112–117
  65. Wajsowicz RC (2007) Seasonal-to-interannual forecasting of tropical Indian Ocean sea surface temperature anomalies: potential predictability and barriers. J Clim 20:3320–3343
  66. Wang G, Hudson D, Yin Y, Alves O, Hendon H, Langford S, Liu G, Tseitkin F (2011) POAMA-2 SST skill assessment and beyond. CAWCR Res Lett 6:40–46
  67. Webster PJ, Moore AM, Loschnigg JP, Leben RR (1999) Coupled ocean–atmosphere dynamics in the Indian Ocean during 1997–98. Nature 401:356–360
  68. White CJ, McInnes KL, Cechet RP, Corney SP, Grose MR, Holz G, Katzfey JJ, Bindoff NL (2013) On regional dynamical downscaling for the assessment and projection of future temperature and precipitation extremes across Tasmania, Australia. Clim Dyn 41:3145–3165
  69. Wilks D (2006) Statistical methods in the atmospheric sciences, 2nd edn. Academic Press, Burlington
  70. Xue Y, Balmaseda MA, Boyer T, Ferry N, Good S, Ishikawa I, Kumar A, Rienecker M, Rosati T, Yin Y (2012) A comparative analysis of upper-ocean heat content variability from an ensemble of operational ocean reanalyses. J Clim 25:6905–6929
  71. Yin Y, Alves O, Oke PR (2011a) An ensemble ocean data assimilation system for seasonal prediction. Mon Weather Rev 139:786–808
  72. Yin Y, Alves O, Hudson D (2011b) Coupled ensemble initialization for a new intraseasonal forecast system using POAMA at the Bureau of Meteorology. In: Proceedings of the international union of geodesy and geophysics conference (IUGG), 28 June–7 July, Melbourne, Australia
  73. Zeng L (2000) Weather derivatives and weather insurance: concept, application, and analysis. Bull Am Meteorol Soc 81:2075–2082
  74. Zhao M, Hendon HH (2009) Representation and prediction of the Indian Ocean dipole in the POAMA seasonal forecast model. Q J R Meteor Soc 135:337–352
  75. Zhong A, Alves O, Hendon H, Rikus L (2006) On aspects of the mean climatology and tropical interannual variability in the BMRC Atmospheric Model (BAM 3.0). BMRC research report no. 121, Bureau of Meteorology, Australia

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Christopher J. White — Centre for Australian Weather and Climate Research (CAWCR), Bureau of Meteorology, Hobart, Australia
  • Debra Hudson — Centre for Australian Weather and Climate Research (CAWCR), Bureau of Meteorology, Melbourne, Australia
  • Oscar Alves — Centre for Australian Weather and Climate Research (CAWCR), Bureau of Meteorology, Melbourne, Australia