
Wave ensemble forecast system for tropical cyclones in the Australian region

Abstract

Forecasting waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but many challenges remain. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (1/4° resolution) and regional (1/10° resolution) domains with forecast ranges of +7 and +3 days, respectively. Because of this relatively coarse resolution (in both the wave models and the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. To address this, a new ensemble-based wave forecasting system for the NW WA region has been developed. A new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric ensemble comprising 51 members, to take into account the uncertainties in the location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season, providing +10-day forecasts. This paper describes the wave forecast components of this system and presents the verification metrics and skill for specific events.



Acknowledgements

The authors gratefully acknowledge co-funding, observations and advice for this study from the Joint Industry Project group – Shell, Woodside, Chevron and INPEX. The authors would like to acknowledge Debbie Hudson, Jean Bidlot and Saima Aijaz for useful discussions. The authors would also like to thank two anonymous reviewers for their valuable comments that greatly enhanced the quality and clarity of the manuscript. This research project was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government. All data presented in this paper have been referenced in figures, tables, text and references.

Author information


Corresponding author

Correspondence to Stefan Zieger.

Additional information

Responsible Editor: Val Swail

This article is part of the Topical Collection on the 15th International Workshop on Wave Hindcasting and Forecasting in Liverpool, UK, September 10–15, 2017

Appendix: Verification metrics and scores

A.1 Ensemble skill and spread

An ensemble prediction system that consists of m ensemble members produces m individual forecasts \(y_k\) for \(k \in \{1,\dots,m\}\) at each forecast time step. The ensemble mean \(\bar{y}_i\) at time i is calculated as follows:

$$ \bar{y}_{i} = \frac{1}{m} \sum\limits_{k = 1}^{m} y_{k} . $$
(A1)

Knowing the ensemble mean \(\bar{y}_i\), one can calculate the ensemble variance \(\sigma_i^2\) as the average squared deviation from the ensemble mean:

$$ \sigma_{i}^{2} = \frac{1}{m} \sum\limits_{k = 1}^{m} \left( y_{k}-\bar{y}_{i} \right)^{2} . $$
(A2)

The difference between the ensemble mean and the observations yields the ensemble skill. According to Fortin et al. (2014), the skill, S, is equivalent to the square root of the mean square error of the ensemble mean and is defined as follows:

$$ S = \left( \frac{1}{n} \sum\limits_{i = 1}^{n} \left( \bar{y}_{i}-\hat{y}_{i} \right)^{2} \right)^{1/2} , $$
(A3)

where \(\bar{y}_i\) is the ensemble mean at time i, n is the number of observations and \(\hat{y}_i\) is the observed value (significant wave height, wind speed or peak swell period). The mean spread \(\bar{\sigma}\) of an ensemble is the square root of the mean ensemble variance (Fortin et al. 2014) and is defined as follows:

$$ \bar{\sigma} = \left( \frac{1}{n} \sum\limits_{i = 1}^{n} \sigma_{i}^{2} \right)^{1/2} , $$
(A4)

where \(\sigma_i^2\) is the ensemble variance from Eq. A2 at time i. Fortin et al. (2014) found that a considerable number of papers use an incorrect form of the mean ensemble spread \(\bar{\sigma}\), taking the arithmetic mean of the individual ensemble standard deviations \(\sigma_i\) (the square root of Eq. A2). This incorrect form leads to smaller values of \(\bar{\sigma}\) than Eq. A4, unless the spread is constant for all forecast times.
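The quantities in Eqs. A1–A4 can be sketched in plain Python. This is an illustrative implementation with hypothetical function names, not the operational verification code:

```python
import math

def ensemble_mean(members):
    """Ensemble mean (Eq. A1): average of the m member forecasts y_k."""
    return sum(members) / len(members)

def ensemble_variance(members):
    """Ensemble variance (Eq. A2): mean squared deviation of the
    members from the ensemble mean at one forecast time."""
    mean = ensemble_mean(members)
    return sum((y - mean) ** 2 for y in members) / len(members)

def ensemble_skill(means, observations):
    """Skill S (Eq. A3): root mean square error of the ensemble mean
    against the observations, one pair per verification time i."""
    n = len(observations)
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(means, observations)) / n)

def ensemble_spread(variances):
    """Mean spread (Eq. A4): square root of the mean ensemble variance.
    Note: averaging the standard deviations instead (the incorrect form
    noted by Fortin et al. 2014) would underestimate the spread."""
    return math.sqrt(sum(variances) / len(variances))
```

A well-calibrated ensemble has spread comparable to skill; comparing `ensemble_skill` and `ensemble_spread` per forecast lead time reproduces the spread–skill comparison used in the verification.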

A.2 Brier score

The Brier score (B) can be considered as the mean-square error in probability space (Brier 1950) and is defined as follows:

$$ B = \frac{1}{n} \sum\limits_{i = 1}^{n} (p_{i}-o_{i})^{2} , $$
(A5)

where n is the number of forecast–observation pairs, \(p_i\) is the forecast probability and \(o_i\) indicates the occurrence of an event: \(o_i = 1\) when the event occurred and \(o_i = 0\) when it did not.
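Eq. A5 can be sketched as a one-line Python function (illustrative only; the function name is hypothetical):

```python
def brier_score(probabilities, outcomes):
    """Brier score (Eq. A5): mean squared error in probability space.

    probabilities: forecast probabilities p_i in [0, 1]
    outcomes: 1 if the event occurred at time i, else 0
    """
    n = len(probabilities)
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / n
```

A perfect deterministic-style forecast (p = 1 for every event, p = 0 for every non-event) scores 0; larger values indicate poorer probabilistic calibration and resolution.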

A.3 Delta score

Hamill (2001) used the χ² test to assess the flatness of a rank histogram. An alternative is the δ-score introduced by Candille and Talagrand (2005). Let n be the number of observations and m be the ensemble size. If the verifying observation occupied each rank equally, then each of the histogram intervals \(r_k\) would have the expected value \(E[r_k] = n/(m+1)\). Therefore, if the entire ensemble system is reliable, one would expect a score \(\delta_0 = m\,E[r_k]\). Candille and Talagrand (2005) used the ratio of the cumulative squared difference between \(r_k\) and \(E[r_k]\) to \(\delta_0\) as a measure of the flatness of a rank histogram:

$$ \delta = \frac{m + 1}{n\,m} \sum\limits_{k = 1}^{m + 1} \left( r_{k}-\frac{n}{m + 1} \right)^{2} . $$
(A6)

A δ score significantly less than 1 indicates that the ensemble is not independent and that the verifying observation occupies ensemble ranks that were less frequently occupied in previous forecasts, a scenario that is highly unlikely. On the other hand, a value significantly greater than 1 suggests that the EPS is unreliable (Candille and Talagrand 2005).
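Eq. A6 can likewise be sketched directly from the rank-histogram counts (an illustrative implementation; the function name is hypothetical):

```python
def delta_score(rank_counts):
    """Delta score (Eq. A6) of Candille and Talagrand (2005).

    rank_counts: the m+1 rank-histogram counts r_k, where m is the
    ensemble size and n = sum(rank_counts) is the number of observations.
    An exactly flat histogram gives 0; a reliable system is expected to
    score about 1; values significantly above 1 indicate unreliability.
    """
    m = len(rank_counts) - 1          # ensemble size
    n = sum(rank_counts)              # number of observations
    expected = n / (m + 1)            # E[r_k] for a flat histogram
    total = sum((r - expected) ** 2 for r in rank_counts)
    return (m + 1) / (n * m) * total
```

The normalisation by \(\delta_0 = n m/(m+1)\) is what makes the expected value of the score equal to 1 for a reliable ensemble, since the sum of squared deviations of a flat multinomial histogram has expectation \(n m/(m+1)\).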


About this article


Cite this article

Zieger, S., Greenslade, D. & Kepert, J.D. Wave ensemble forecast system for tropical cyclones in the Australian region. Ocean Dynamics 68, 603–625 (2018). https://doi.org/10.1007/s10236-018-1145-9


Keywords

  • Wave ensemble
  • Probabilistic wave forecast
  • Tropical cyclones