Growing Algorithm of Laguerre Orthogonal Basis Neural Network with Weights Directly Determined

  • Conference paper
Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence (ICIC 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5227)

Abstract

Determining an appropriate neural-network (NN) structure is an important issue for a given learning or training task, since NN performance depends heavily on it. To remedy the weaknesses of conventional BP neural networks and their learning algorithms, a new Laguerre orthogonal basis neural network is constructed. Based on this special structure, a weights-direct-determination method is derived, which obtains the optimal weights of such a neural network directly (i.e., in a single step). Furthermore, a growing algorithm is presented for immediately determining the minimal number of hidden-layer neurons. Theoretical analysis and simulation results substantiate the efficacy of the Laguerre-orthogonal-basis neural network and its growing algorithm based on the weights-direct-determination method.
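The two ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a one-dimensional input, a hidden layer whose activations are the Laguerre polynomials (generated by their standard three-term recurrence), and output weights found in one step by linear least squares (pseudoinverse), which is the usual form a weights-direct-determination method takes for an orthogonal-basis network. The function names, the tolerance, and the neuron cap are all illustrative assumptions.

```python
import numpy as np

def laguerre_basis(x, n):
    """Hidden-layer activations: Laguerre polynomials L_0..L_{n-1} at points x.

    Uses the recurrence (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x),
    with L_0(x) = 1 and L_1(x) = 1 - x.
    """
    L = np.ones((len(x), n))
    if n > 1:
        L[:, 1] = 1.0 - x
    for k in range(1, n - 1):
        L[:, k + 1] = ((2 * k + 1 - x) * L[:, k] - k * L[:, k - 1]) / (k + 1)
    return L

def direct_weights(x, y, n):
    """Weights-direct-determination: solve A w = y by least squares.

    One pseudoinverse solve replaces iterative BP training of the
    output weights -- the 'in one step' claim of the abstract.
    """
    A = laguerre_basis(x, n)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def grow(x, y, tol=1e-8, max_neurons=30):
    """Growing algorithm: add hidden neurons until the training MSE
    falls below tol, returning the smallest such neuron count."""
    for n in range(1, max_neurons + 1):
        w = direct_weights(x, y, n)
        mse = np.mean((laguerre_basis(x, n) @ w - y) ** 2)
        if mse < tol:
            return n, w
    return max_neurons, w

# Toy target on [0, 4]: a smooth function the basis can fit quickly.
x = np.linspace(0.0, 4.0, 200)
y = np.exp(-x / 2.0) * np.sin(x)
n, w = grow(x, y)
```

Because each candidate network is trained in a single linear solve, growing the structure costs one least-squares problem per trial size rather than a full BP training run, which is what makes the structure search cheap.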




Editor information

De-Shuang Huang, Donald C. Wunsch II, Daniel S. Levine, Kang-Hyun Jo


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, Y., Zhong, T., Li, W., Xiao, X., Yi, C. (2008). Growing Algorithm of Laguerre Orthogonal Basis Neural Network with Weights Directly Determined. In: Huang, DS., Wunsch, D.C., Levine, D.S., Jo, KH. (eds) Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence. ICIC 2008. Lecture Notes in Computer Science(), vol 5227. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85984-0_8

  • DOI: https://doi.org/10.1007/978-3-540-85984-0_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85983-3

  • Online ISBN: 978-3-540-85984-0
