Approximation Properties of Perceptrons

  • Andrzej Bielecki
Part of the Studies in Computational Intelligence book series (SCI, volume 770)


As mentioned in Sect. 8.1, an untrained perceptron can be treated as a family of functions \(\mathbb {R}^n\rightarrow \mathbb {R}^m\) indexed by the set of all its weight vectors. A given training set, in turn, can be regarded as the set of points to which a mapping should be approximated as well as possible. Investigations of the approximation abilities of neural networks focus on the existence of an arbitrarily close approximation and on how the achievable accuracy depends on the complexity of a perceptron. In this chapter a few basic theorems concerning the approximation properties of perceptrons are discussed. The presented theorems are classical results; in this monograph they are given without proofs, which can be found in the literature.
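The view of a perceptron as a weight-indexed family of functions can be illustrated with a minimal sketch: a one-hidden-layer network \(\mathbb {R}\rightarrow \mathbb {R}\) with tanh units, where each weight vector selects one member of the family, and a toy training set measures how well that member approximates the given points. The architecture, sizes, and target function below are illustrative assumptions, not taken from the chapter.

```python
import math
import random

def perceptron(x, weights):
    """Evaluate one member of the function family, selected by `weights`.

    weights = (w1, b1, w2, b2): hidden weights/biases and output
    weights/bias of a one-hidden-layer network with tanh units.
    """
    w1, b1, w2, b2 = weights
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(v * h for v, h in zip(w2, hidden)) + b2

# A training set: the points to which the mapping should be approximated
# (here, samples of sin on roughly [0, pi] -- a hypothetical example).
training_set = [(x / 10, math.sin(x / 10)) for x in range(32)]

def mean_squared_error(weights):
    """Approximation quality of one member of the family on the point set."""
    return sum((perceptron(x, weights) - y) ** 2
               for x, y in training_set) / len(training_set)

def random_weights(hidden_units=8):
    """Draw one index (weight vector) of the family at random."""
    rnd = lambda: random.uniform(-2.0, 2.0)
    return ([rnd() for _ in range(hidden_units)],
            [rnd() for _ in range(hidden_units)],
            [rnd() for _ in range(hidden_units)],
            rnd())

# Crude random search over the family: the best member found is the
# closest approximation to the training points among the sampled ones.
random.seed(0)
best = min((random_weights() for _ in range(2000)), key=mean_squared_error)
print(mean_squared_error(best))
```

A proper training algorithm would adjust the weights by gradient descent rather than random search; the point here is only that training selects, from the family indexed by the weights, a member close to the given point set.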

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Electrical Engineering, Automation, Computer Science and Biomedical Engineering, AGH University of Science and Technology, Cracow, Poland