Abstract
In this contribution, a new training method is proposed for neural networks based on neurons whose output takes discrete states. The method minimises the well-known least-squares criterion using only the signs of the error function together with imprecise gradient values. The algorithm is based on a modified one-dimensional bisection method, and it treats supervised training of networks of neurons with discrete output states as a minimisation problem with imprecise values.
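The key idea of the abstract can be illustrated with a minimal sketch (this is an illustration, not the authors' exact algorithm): a one-dimensional bisection search that locates a minimiser using only the sign of the derivative. Because bisection consults signs alone, it tolerates gradient values that are inaccurate in magnitude, which is the property the abstract emphasises. The function name and interface below are hypothetical.

```python
import math


def sign_bisection_minimise(grad_sign, a, b, tol=1e-8, max_iter=200):
    """Locate x in [a, b] where the derivative changes sign from - to +.

    grad_sign(x) must return only the sign (+1, -1, or 0) of f'(x);
    the magnitude of f'(x) is never used, so imprecise gradient values
    suffice. Assumes a minimiser is bracketed: grad_sign(a) <= 0 and
    grad_sign(b) >= 0.
    """
    sa = grad_sign(a)
    if sa == 0:
        return a
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        sm = grad_sign(m)
        if sm == 0 or (b - a) < tol:
            return m
        if sm == sa:        # minimiser lies to the right of m
            a = m
        else:               # sign change detected: minimiser in [a, m]
            b = m
    return 0.5 * (a + b)


# Example: minimise f(x) = (x - 2)^2 on [0, 5]. Only the sign of
# f'(x) = 2(x - 2) is supplied, not its value.
xmin = sign_bisection_minimise(
    lambda x: 0 if x == 2 else math.copysign(1.0, x - 2), 0.0, 5.0)
```

The same sign-only principle is what lets the full method cope with networks of discrete-output neurons, where exact gradient information is unavailable.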
Subject classification: AMS(MOS) 65K10, 49D10, 68T05, 68G05.
© 1997 Springer Science+Business Media New York
Cite this chapter
Magoulas, G.D., Vrahatis, M.N., Grapsa, T.N., Androulakis, G.S. (1997). A Training Method for Discrete Multilayer Neural Networks. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_42
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-7794-8
Online ISBN: 978-1-4615-6099-9
eBook Packages: Springer Book Archive