A Multi-Output-Layer Perceptron

Neural Computing & Applications

Abstract

This paper investigates the possibility of improving the classification capability of single-layer and multilayer perceptrons by incorporating additional output layers. This Multi-Output-Layer Perceptron (MOLP) is a new type of constructive network, though the emphasis is on improving pattern separability rather than network efficiency. The MOLP is trained using the standard back-propagation (BP) algorithm. The studies concentrate on realizations of arbitrary functions which map from an x-dimensional input space to the output space. In the MOLP, all problems existing in an original n-dimensional space in the hidden layer are transformed into a higher (n+1)-dimensional space, so that the possibility of linear separability is increased. Experimental investigations show that the classification ability of the MOLP is superior to that of an equivalent MLP. In general, this performance increase can be achieved with shorter training times and simpler network architectures.
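
To make the dimensionality-lifting idea concrete, the following is a minimal NumPy sketch of one possible reading of the MOLP: a second output layer receives the n hidden activations concatenated with the first output layer's activation, so the final classification is performed in an (n+1)-dimensional space, as the abstract describes. The specific wiring, layer sizes, XOR task, and learning rate are illustrative assumptions, not the paper's reported configuration; only the use of standard BP training is taken from the abstract.

```python
import numpy as np

# Illustrative MOLP sketch on XOR (assumed wiring, not the paper's setup):
# output layer 2 sees [hidden activations, output layer 1], lifting the
# hidden representation from n to n+1 dimensions.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 2
W1 = rng.normal(scale=0.5, size=(2, n_hidden));     b1 = np.zeros(n_hidden)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(n_hidden, 1));     b2 = np.zeros(1)         # hidden -> output layer 1
W3 = rng.normal(scale=0.5, size=(n_hidden + 1, 1)); b3 = np.zeros(1)         # [hidden, out1] -> output layer 2

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations: n-dimensional space
    o1 = sigmoid(h @ W2 + b2)       # first output layer
    a = np.hstack([h, o1])          # lifted (n+1)-dimensional representation
    o2 = sigmoid(a @ W3 + b3)       # second (final) output layer

    # Backward pass: standard BP with squared error.
    d_o2 = (o2 - y) * o2 * (1 - o2)                  # error signal at output layer 2
    d_a = d_o2 @ W3.T                                # gradient w.r.t. lifted representation
    d_o1 = d_a[:, -1:] * o1 * (1 - o1)               # error signal at output layer 1
    d_h = (d_a[:, :-1] + d_o1 @ W2.T) * h * (1 - h)  # hidden units receive both paths

    W3 -= lr * a.T @ d_o2; b3 -= lr * d_o2.sum(axis=0)
    W2 -= lr * h.T @ d_o1; b2 -= lr * d_o1.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(np.round(o2, 3))  # should approach [[0], [1], [1], [0]] for most seeds
```

In this reading, the concatenation step is what raises the dimensionality of the space in which the final layer separates classes; other wirings consistent with the abstract (e.g., cascading several single-unit output layers) would lift the dimension in the same way.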

Cite this article

Owens, F.J., Zheng, G.H. & Irvine, D.A. A Multi-Output-Layer Perceptron. Neural Comput & Applic 4, 10–20 (1996). https://doi.org/10.1007/BF01413865
