Abstract
In this chapter, we consider one of the most interesting applications of the multi-valued neuron (MVN): its use as the basic neuron in a multilayer neural network based on multi-valued neurons (MLMVN). In Section 4.1, we consider the basic ideas of the derivative-free backpropagation learning algorithm for MLMVN. In Section 4.2, we derive the error backpropagation rule for MLMVN, first for a network with a single hidden layer and a single output neuron, and then for the most general case of an arbitrary number of layers and neurons per layer. In Section 4.3, the Convergence Theorem for the MLMVN learning algorithm is proven. In Section 4.4, we consider in detail how MLMVN learns when solving a classification problem; in the same section we also consider how MLMVN solves the problem of Mackey-Glass time series prediction. Concluding remarks are given in Section 4.5.
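To make the abstract's terminology concrete, here is a minimal sketch of a single multi-valued neuron, the building block of MLMVN. This is not code from the chapter, only an illustration under standard assumptions about MVN: the neuron has complex-valued weights, and its discrete activation function maps the weighted sum z onto the k-th root of unity whose sector contains arg(z), while the continuous activation simply projects z onto the unit circle. All function names here are the author's choices for illustration.

```python
import numpy as np

def mvn_discrete_activation(z, k):
    """Map complex z to exp(i*2*pi*j/k), where the j-th sector
    [2*pi*j/k, 2*pi*(j+1)/k) of the unit circle contains arg(z)."""
    angle = np.angle(z) % (2 * np.pi)      # argument of z in [0, 2*pi)
    j = int(angle // (2 * np.pi / k))      # index of the sector hit by z
    return np.exp(2j * np.pi * j / k)

def mvn_continuous_activation(z):
    """Continuous MVN activation: project z onto the unit circle."""
    return z / abs(z)

def mvn_output(weights, bias, inputs, k=None):
    """One MVN forward pass: complex weighted sum, then activation.
    With k given, the discrete (k-valued) activation is used;
    otherwise the continuous one."""
    z = bias + np.dot(weights, inputs)
    return mvn_continuous_activation(z) if k is None else mvn_discrete_activation(z, k)
```

In an MLMVN, neurons of this kind are arranged in feedforward layers; the derivative-free learning rule of Sections 4.1-4.2 adjusts the complex weights by sharing each neuron's error among its inputs rather than by differentiating the activation function.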
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Aizenberg, I. (2011). Multilayer Feedforward Neural Network Based on Multi-Valued Neurons (MLMVN). In: Complex-Valued Neural Networks with Multi-Valued Neurons. Studies in Computational Intelligence, vol 353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20353-4_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-20352-7
Online ISBN: 978-3-642-20353-4
eBook Packages: Engineering (R0)