Optimal Training Sequences for Locally Recurrent Neural Networks

  • Conference paper
Artificial Neural Networks – ICANN 2009 (ICANN 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5768)

Abstract

The problem of determining an optimal training schedule for a locally recurrent neural network is discussed. Specifically, the choice of the most informative measurement data, which guarantees reliable prediction of the network response, is considered. Using a scalar performance measure defined on the Fisher information matrix associated with the network parameters, the problem is formulated in terms of optimal experimental design; its solution can then be obtained by adapting efficient numerical algorithms from convex optimization theory. Finally, illustrative experiments verify the presented approach.
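The selection criterion described above can be sketched in code. The following is a minimal illustration, not the paper's algorithm: it assumes a single locally recurrent (IIR) neuron as a stand-in for the network, computes parameter sensitivities by finite differences, builds the Fisher information matrix from them, and scores candidate training sequences with the D-optimality criterion (log-determinant of the information matrix). The model, the candidate pool, and the exhaustive scoring loop are all hypothetical simplifications; the paper works with continuous design measures and dedicated convex-optimization algorithms.

```python
import numpy as np

def simulate(params, u):
    # Hypothetical single locally recurrent neuron with output feedback:
    # y_t = tanh(w*u_t + a*y_{t-1} + b)
    w, a, b = params
    y = np.zeros(len(u))
    prev = 0.0
    for t, ut in enumerate(u):
        prev = np.tanh(w * ut + a * prev + b)
        y[t] = prev
    return y

def sensitivities(params, u, eps=1e-6):
    # Jacobian of the network response w.r.t. parameters (central differences)
    p = np.asarray(params, dtype=float)
    J = np.zeros((len(u), len(p)))
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[:, i] = (simulate(p + dp, u) - simulate(p - dp, u)) / (2 * eps)
    return J

def fim(params, u, sigma2=1.0):
    # Fisher information matrix for i.i.d. Gaussian output noise
    J = sensitivities(params, u)
    return J.T @ J / sigma2

def d_criterion(M):
    # D-optimality: maximize log det of the information matrix
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

# Score a pool of candidate input sequences at a nominal parameter estimate
rng = np.random.default_rng(0)
theta0 = (0.8, 0.3, 0.1)                      # nominal parameters (assumed)
candidates = [rng.uniform(-1, 1, 50) for _ in range(20)]
scores = [d_criterion(fim(theta0, u)) for u in candidates]
best = int(np.argmax(scores))                 # most informative sequence
```

In this sketch the "optimal training sequence" is simply the candidate maximizing the D-criterion; the dependence of the criterion on the unknown parameters (through `theta0`) is the usual difficulty of nonlinear experimental design, addressed in the paper via locally optimal designs around a nominal estimate.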





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Patan, K., Patan, M. (2009). Optimal Training Sequences for Locally Recurrent Neural Networks. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_9

  • DOI: https://doi.org/10.1007/978-3-642-04274-4_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science (R0)
