A newly combination model based on data denoising strategy and advanced optimization algorithm for short-term wind speed prediction

  • Original Research
Journal of Ambient Intelligence and Humanized Computing

Abstract

Accurate short-term wind speed prediction can not only improve the efficiency of wind power generation but also relieve pressure on the power system and improve the stability of the grid. Existing wind speed prediction systems improve forecasting performance to some extent, but they also have inherent shortcomings, such as limited forecasting accuracy or indicators that are difficult to obtain. In this paper, based on 10-min wind speed data from a wind farm, a new combination model is developed that consists of three parts: a data noise reduction technique, five individual artificial-intelligence prediction algorithms, and a multi-objective optimization algorithm. Detailed experiments demonstrate that the combination model outperforms the comparison models, addressing both the instability of traditional forecasting models and the low accuracy of short-term wind speed forecasting.


Abbreviations

ALO: Ant lion optimization

BPNN: Back propagation neural network

ELM: Extreme learning machine

WNN: Wavelet neural network

SVM: Support vector machine

EMD: Empirical mode decomposition

MAE: Mean absolute error

MSE: Mean squared error

SSE: Sum of squared errors

RF: Random forest

ENN: Elman neural network

MOALO: Multi-objective ant lion optimization

MODA: Multi-objective dragonfly algorithm

VMD: Variational mode decomposition

MOGOA: Multi-objective grasshopper optimization algorithm

ARIMA: Autoregressive integrated moving average model

EEMD: Ensemble empirical mode decomposition

MAPE: Mean absolute percentage error

FNN: Fully connected neural network

LSTM: Long short-term memory

FL: Fuzzy logic

WDD: Wavelet domain denoising
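Several of the abbreviations above are standard error metrics. As a quick reference, a minimal sketch of how MAE, MSE, SSE, and MAPE are typically computed (the arrays below are illustrative, not the paper's wind speed data):

```python
import numpy as np

def error_metrics(actual, predicted):
    """Compute MAE, MSE, SSE, and MAPE between two series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - actual
    return {
        "MAE": np.mean(np.abs(err)),
        "MSE": np.mean(err ** 2),
        "SSE": np.sum(err ** 2),
        # MAPE assumes no zero actual values (wind speed is positive)
        "MAPE": np.mean(np.abs(err / actual)) * 100.0,
    }

m = error_metrics([5.0, 6.0, 4.0], [5.5, 5.0, 4.5])
```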


Funding

This work was supported by the National Natural Science Foundation of China (Grant number 71671029).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Jianzhou Wang.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Appendix A

See Table 3.

Table 3 Summary of different wind speed prediction models

1.2 Appendix B

See Table 4.

Table 4 Summary of different data denoising methods and optimization algorithms

1.3 Appendix C

The pseudocode of VMD

Algorithm: Pseudocode of VMD denoising

1. Choose the number of IMFs \(M\) and the penalty coefficient \(\alpha\). Initialize \(\{ \hat{\beta}_{m}^{1} \}\), \(\{ \hat{\gamma}_{m}^{1} \}\), \(\hat{\lambda}^{1}\), \(n \leftarrow 0\)

2. REPEAT

3. \(n \leftarrow n + 1\)

4. FOR \(m = 1:M\) DO

5. /* Decompose the signal into its band-limited component modes */

6. Update \(\hat{\beta}_{m}\) for every \(s \ge 0\):
\(\hat{\beta}_{m}^{n+1}(s) = \dfrac{\hat{\gamma}(s) - \sum_{i<m} \hat{\beta}_{i}^{n+1}(s) - \sum_{i>m} \hat{\beta}_{i}^{n}(s) + \hat{\lambda}^{n}(s)/2}{1 + 2\alpha (\gamma - \gamma_{m}^{n})^{2}}\)

7. Update the center frequency \(\hat{\gamma}_{m}\):
\(\gamma_{m}^{n+1} = \dfrac{\int_{0}^{\infty} \gamma \, |\hat{\beta}_{m}(\gamma)|^{2} \, d\gamma}{\int_{0}^{\infty} |\hat{\beta}_{m}(\gamma)|^{2} \, d\gamma}\)

8. /* Each decomposed mode (IMF) is compact around its center frequency */

9. END FOR

10. Dual ascent for every \(s \ge 0\):
\(\hat{\lambda}^{n+1}(s) = \hat{\lambda}^{n}(s) + \tau \left( \hat{\gamma}(s) - \sum_{m} \hat{\beta}_{m}^{n+1}(s) \right)\)

11. /* Iterate until the estimated modes converge to the given tolerance */

12. UNTIL convergence: \(\sum_{m} \left\| \hat{\beta}_{m}^{n+1} - \hat{\beta}_{m}^{n} \right\|_{2}^{2} / \left\| \hat{\beta}_{m}^{n} \right\|_{2}^{2} < \varepsilon\)
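The alternating updates above can be sketched in NumPy. This is a simplified frequency-domain implementation under assumed defaults (the mode count `K`, penalty `alpha`, and dual step `tau` are hypothetical choices, not the paper's settings):

```python
import numpy as np

def vmd(signal, K=3, alpha=2000.0, tau=0.1, tol=1e-7, max_iter=500):
    """Simplified VMD: decompose `signal` into K band-limited modes.

    Follows the pseudocode: alternately update the mode spectra (beta),
    the center frequencies (gamma), and the Lagrange multiplier (lambda),
    until the relative change of the modes falls below `tol`.
    """
    T = len(signal)
    f_hat = np.fft.fft(signal)                  # spectrum of the input
    freqs = np.fft.fftfreq(T)                   # normalized frequency axis
    u_hat = np.zeros((K, T), dtype=complex)     # mode spectra (beta_m)
    omega = np.linspace(0.05, 0.45, K)          # initial center freqs (gamma_m)
    lam = np.zeros(T, dtype=complex)            # Lagrange multiplier
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for m in range(K):
            # Wiener-filter-like update of mode m around its center frequency
            residual = f_hat - u_hat.sum(axis=0) + u_hat[m]
            u_hat[m] = (residual + lam / 2) / (1 + 2 * alpha * (freqs - omega[m]) ** 2)
            # Center frequency = power-weighted mean frequency of the mode
            power = np.abs(u_hat[m]) ** 2
            omega[m] = np.sum(np.abs(freqs) * power) / (np.sum(power) + 1e-12)
        # Dual ascent on the reconstruction constraint
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))
        num = np.sum(np.abs(u_hat - u_prev) ** 2)
        den = np.sum(np.abs(u_prev) ** 2) + 1e-12
        if num / den < tol:
            break
    modes = np.real(np.fft.ifft(u_hat, axis=1))
    return modes, omega
```

Denoising then keeps the low-frequency modes and discards the noisiest one, as in the paper's preprocessing step.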

1.4 Appendix D

The pseudocode of MOALO

Algorithm: Pseudocode of MOALO

Objective functions:

\(\min \left\{ \begin{gathered} function_{1}(f) = \frac{1}{M}\sum\limits_{l = 1}^{M} \left| \hat{f}_{l} - f_{l} \right| \hfill \\ function_{2}(f) = std(\hat{f}_{l} - f_{l}), \; l = 1,2,\ldots,M \hfill \\ \end{gathered} \right.\)

1. /* Set up the MOALO's parameters. */

2. WHILE the final conditions are not met

3. FOR EACH ant

4. /* Choose a random antlion from the archive. */

5. /* Choose an antlion from the archive using the roulette wheel. */

6. /* Update the random-walk bounds \(e^{p}\) and \(f^{p}\) with the two equations below. */

7. \(e^{p} = e^{p}/r\)

8. \(f^{p} = f^{p}/r\)

9. /* Generate a random walk and normalize it with the two equations below. */

10. \(X(k) = \left[ 0, cumsum(2r(k_{1}) - 1), \ldots, cumsum(2r(k_{m}) - 1) \right]\)

11. \(X_{n}^{k} = \dfrac{(X_{n}^{k} - a_{n})(d_{n}^{k} - c_{n}^{k})}{b_{n} - a_{n}} + c_{n}\)

12. /* Update the position of each ant: */

13. \(Ant_{l}^{k} = (R_{A}^{k} + R_{E}^{k})/2\)

14. END FOR

15. /* Compute the objective values of each ant. */

16. /* Find the non-dominated solutions. */

17. /* Update the archive with the new non-dominated solutions. */

18. IF the archive is full DO

19. /* Remove some crowded solutions from the archive to make room for the new ones, */

20. applying the roulette wheel with \(P_{m} = N_{m}/c \; (c > 1)\)

21. END IF

22. IF any new solutions in the archive lie outside the boundary DO

23. /* Update the boundaries to cover the new solutions. */

24. END IF

25. END WHILE

26. RETURN archive

27. \(X^{*} = ChooseLeader(archive)\)
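The random walk (steps 9–11) and the two objectives can be sketched as follows; a minimal illustration with hypothetical bounds, not tied to the paper's experiments:

```python
import numpy as np

def objectives(predicted, actual):
    """The two MOALO objectives: mean absolute error and std of errors."""
    err = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    return np.mean(np.abs(err)), np.std(err)

def random_walk(steps, rng):
    """Cumulative random walk X(k): each step is +1 or -1 (step 10)."""
    r = rng.random(steps) > 0.5                 # Bernoulli r(k)
    return np.concatenate(([0.0], np.cumsum(2.0 * r - 1.0)))

def normalize_walk(X, c, d):
    """Min-max rescale the walk into the current bounds [c, d] (step 11);
    a_n and b_n in the pseudocode are the walk's minimum and maximum."""
    a, b = X.min(), X.max()
    return (X - a) * (d - c) / (b - a) + c

rng = np.random.default_rng(42)
walk = normalize_walk(random_walk(50, rng), c=-1.0, d=1.0)
```

Each ant's new position (step 13) is then the average of a normalized walk around a roulette-selected antlion and one around the elite.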

1.5 Appendix E

See Table 5.

Table 5 Parameter details of models and algorithms in this paper

1.6 Appendix F

See Table 6.

Table 6 Forecasting error comparison between combination model and other models based on VMD

1.7 Appendix G

See Table 7.

Table 7 Forecasting error between combination model and other optimization models

1.8 Appendix H

See Table 8.

Table 8 Forecasting error between combination model and two deep learning models

1.9 Appendix I

See Table 9.

Table 9 Forecasting error in the different ratio of the training set to the test set

1.10 Appendix J

See Table 10.

Table 10 DM test

1.11 Appendix K

See Table 11.

Table 11 Forecasting effectiveness

Cite this article

Lv, M., Wang, J., Niu, X. et al. A newly combination model based on data denoising strategy and advanced optimization algorithm for short-term wind speed prediction. J Ambient Intell Human Comput 14, 8271–8290 (2023). https://doi.org/10.1007/s12652-021-03595-x
