
Gradient Boosting with Extreme Learning Machines for the Optimization of Nonlinear Functionals

Chapter
Advances in Optimization and Decision Science for Society, Services and Enterprises

Part of the book series: AIRO Springer Series (AIROSS, volume 3)

Abstract

In this paper we investigate the use of the Extreme Learning Machine (ELM) paradigm for the approximate minimization of a general class of functionals that arise routinely in operations research, optimal control and statistics. The ELM and, more generally, neural networks with random hidden weights have proved to be very efficient tools for optimizing the costs typical of machine learning problems, owing to the possibility of computing the optimal outer weights in closed form. Yet this feature is available only when the cost is a sum of squared terms, as in regression; more general cost functionals must be addressed by other methods. Here we focus on the gradient boosting technique, combined with the ELM, to address important instances of optimization problems such as optimal control of a complex system, multistage optimization and maximum likelihood estimation. Through the application of a simple gradient boosting descent algorithm, we show how the accuracy and efficiency of the ELM can be exploited for the approximate solution of this wide family of optimization problems.
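Since the chapter itself is behind a paywall, the following Python sketch is only a minimal illustration of the scheme the abstract describes, not the authors' code: an ELM whose outer weights are computed in closed form by regularized least squares, used as the base learner in a functional gradient descent (gradient boosting) loop for a cost that is not a sum of squares. All names (ELM, boost_elm) and the grad_J interface are assumptions introduced here for illustration.

# A minimal sketch, assuming synthetic data and a hand-rolled ELM; this is
# not the authors' implementation. A general differentiable functional J is
# minimized by fitting, at each boosting round, one ELM to the negative
# gradient of J at the current model (a squared-error subproblem, which the
# ELM solves in closed form).
import numpy as np


class ELM:
    """Single-hidden-layer network with random, fixed hidden weights;
    only the outer weights are trained, via regularized least squares."""

    def __init__(self, n_hidden=50, reg=1e-6, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def _features(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        # Hidden weights drawn at random and never updated (the ELM paradigm).
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._features(X)
        # Closed-form outer weights: solve (H'H + reg*I) beta = H'y.
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._features(X) @ self.beta


def boost_elm(X, grad_J, n_rounds=100, step=0.1):
    """Gradient boosting with ELM base learners. `grad_J(F)` is an assumed
    interface returning the gradient of the cost functional J with respect
    to the model outputs F at the sample points."""
    F = np.zeros(X.shape[0])              # current model output at the samples
    learners = []
    for t in range(n_rounds):
        residual = -grad_J(F)             # pseudo-residuals: negative gradient
        h = ELM(seed=t).fit(X, residual)  # squared-error fit -> closed form
        F = F + step * h.predict(X)
        learners.append(h)
    return learners, F


# Toy usage: the exponential loss J(F) = sum_i exp(-y_i * F_i) is not a sum
# of squares, so it admits no closed-form ELM solution, but boosting handles
# it through its gradient alone.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])
learners, F = boost_elm(X, grad_J=lambda F: -y * np.exp(-y * F))
print("training error rate:", np.mean(np.sign(F) != y))

The only assumption about J is that its gradient with respect to the model outputs is available at the sample points, which is what lets one boosting scheme cover losses as different as control costs and negative log-likelihoods.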


Notes

  1. We assume that the minimum exists. Otherwise, the problem can be redefined in terms of ε-optimal solutions.
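For reference, a standard definition (not quoted from the chapter): a point x_ε is called ε-optimal for a functional J over a feasible set X if

J(x_\varepsilon) \le \inf_{x \in X} J(x) + \varepsilon, \qquad \varepsilon > 0,

so the guarantees hold up to an arbitrarily small tolerance even when the infimum is not attained.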


Author information


Corresponding author

Correspondence to Danilo Macciò.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Cervellera, C., Macciò, D. (2019). Gradient Boosting with Extreme Learning Machines for the Optimization of Nonlinear Functionals. In: Paolucci, M., Sciomachen, A., Uberti, P. (eds) Advances in Optimization and Decision Science for Society, Services and Enterprises. AIRO Springer Series, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-030-34960-8_7
