A Training Method for Discrete Multilayer Neural Networks

Chapter in: Mathematics of Neural Networks

Part of the book series: Operations Research/Computer Science Interfaces Series (ORCS, volume 8)

Abstract

In this contribution a new training method is proposed for neural networks built from neurons whose output is restricted to discrete states. The method minimises the well-known least-squares criterion using only the signs of the error function and imprecise gradient values. The algorithm is based on a modified one-dimensional bisection method, and it treats supervised training of networks of neurons with discrete output states as a minimisation problem with imprecise values.

Subject classification: AMS(MOS) 65K10, 49D10, 68T05, 68G05.
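
The abstract describes a sign-driven bisection approach to training. As a rough illustration only, and not the chapter's actual algorithm, the Python/NumPy sketch below applies the same ingredients to a single hard-limiting neuron: a least-squares error, crude derivative estimates of which only the sign is used, and a one-dimensional bisection search over the step length. The helper names (lsq_error, deriv_sign, bisection_step), the forward-difference sign estimate, and the coordinate-wise choice of search direction are all assumptions made for this sketch.

```python
# Illustrative sketch only -- not the chapter's exact algorithm. It mirrors the
# idea in the abstract: minimise a least-squares criterion for a neuron with a
# discrete (hard-limiting) output, using only the *sign* of imprecise
# derivative estimates inside a one-dimensional bisection search.
import numpy as np

def step(z):
    """Hard-limiting activation: the neuron's output is 0 or 1."""
    return (z >= 0.0).astype(float)

def lsq_error(w, X, t):
    """Least-squares criterion over the training set (X, t)."""
    return 0.5 * np.sum((step(X @ w) - t) ** 2)

def deriv_sign(w, d, X, t, h=0.05):
    """Sign of a crude forward-difference estimate of the directional
    derivative along d; only the sign is used, so the value may be imprecise."""
    return np.sign(lsq_error(w + h * d, X, t) - lsq_error(w, X, t))

def bisection_step(w, d, X, t, a=0.0, b=2.0, iters=25):
    """Bisection on the step length: halve [a, b] according to the derivative
    sign at the midpoint, assuming the error stops decreasing only once
    along the search direction d."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if deriv_sign(w + m * d, d, X, t) > 0:
            b = m   # error already increasing: keep the left half
        else:
            a = m   # error still decreasing (or flat): keep the right half
    return 0.5 * (a + b)

# Toy usage on the logical AND function (third input is a constant bias of 1).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])
w = np.array([0.3, -0.2, 0.1])
d = -np.array([deriv_sign(w, e, X, t, h=0.5) for e in np.eye(3)])  # sign-only direction
w = w + bisection_step(w, d, X, t) * d
print("least-squares error after one step:", lsq_error(w, X, t))
```

In this toy run the sign-guided line search lowers the least-squares error on the AND problem without ever computing an accurate gradient value, which is the property the chapter's method exploits for networks of discrete-output neurons.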

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Magoulas, G.D., Vrahatis, M.N., Grapsa, T.N., Androulakis, G.S. (1997). A Training Method for Discrete Multilayer Neural Networks. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_42

  • DOI: https://doi.org/10.1007/978-1-4615-6099-9_42

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7794-8

  • Online ISBN: 978-1-4615-6099-9
