Improving Randomized Learning of Feedforward Neural Networks by Appropriate Generation of Random Parameters

  • Conference paper

Advances in Computational Intelligence (IWANN 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11506)

Abstract

In this work, a method for generating the random parameters in randomized learning of a single-hidden-layer feedforward neural network is proposed. The method first randomly selects the slope angles of the hidden neurons' activation functions from an interval adjusted to the target function, then randomly rotates the activation functions, and finally distributes them across the input space. For complex target functions the proposed method gives better results than the approach commonly used in practice, in which the random parameters are selected from a fixed interval. This is because it introduces the steepest fragments of the activation functions into the input hypercube and avoids their saturated fragments.
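The three steps described in the abstract can be sketched roughly as follows. This is a loose illustration under stated assumptions, not the paper's exact algorithm: the function and parameter names (`random_hidden_params`, `alpha_min`, `alpha_max`) are hypothetical, the input domain is assumed to be the unit hypercube, and the slope relation assumes the standard logistic activation, whose maximum slope at the inflection point is one quarter of the weight-vector norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hidden_params(n_hidden, n_inputs, alpha_min=np.pi / 12, alpha_max=np.pi / 3):
    """Sketch of slope-angle-based generation of hidden-node weights and biases
    for a single-hidden-layer network with logistic activations (hypothetical names)."""
    # Step 1: draw random slope angles and convert them to weight norms.
    # For the logistic function, the slope at the inflection point is |w|/4,
    # so |w| = 4*tan(alpha) gives a sigmoid whose steepest slope is tan(alpha).
    alpha = rng.uniform(alpha_min, alpha_max, n_hidden)
    norms = 4.0 * np.tan(alpha)

    # Step 2: random rotation - pick a random unit direction per hidden node.
    d = rng.normal(size=(n_hidden, n_inputs))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    W = d * norms[:, None]

    # Step 3: distribute the inflection points across the input hypercube [0, 1]^n
    # by choosing the bias so that w.x0 + b = 0 at a random point x0.
    x0 = rng.uniform(0.0, 1.0, size=(n_hidden, n_inputs))
    b = -np.sum(W * x0, axis=1)
    return W, b

W, b = random_hidden_params(20, 2)
X = rng.uniform(size=(100, 2))                 # 100 random points in the input hypercube
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))       # hidden-layer outputs
```

Because each sigmoid's inflection point is placed inside the input hypercube, its steepest fragment (rather than a flat, saturated tail) is what the output layer combines, which is the intuition the abstract gives for the method's advantage over drawing weights and biases from a fixed interval.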

Supported by Grant 2017/27/B/ST6/01804 from the National Science Centre, Poland.



Author information

Corresponding author

Correspondence to Grzegorz Dudek.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Dudek, G. (2019). Improving Randomized Learning of Feedforward Neural Networks by Appropriate Generation of Random Parameters. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2019. Lecture Notes in Computer Science, vol 11506. Springer, Cham. https://doi.org/10.1007/978-3-030-20521-8_43

  • DOI: https://doi.org/10.1007/978-3-030-20521-8_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20520-1

  • Online ISBN: 978-3-030-20521-8

  • eBook Packages: Computer Science, Computer Science (R0)
