Abstract
Probabilistic neural networks (PNN), introduced by Specht [255] – [258], have their predecessors in the theory of statistical pattern classification. In the fifties and sixties, problems of statistical pattern classification in the stationary case were addressed by parametric methods, using the available apparatus of mathematical statistics (e.g. [35], [75], [89], [90], [293]). The probability densities were assumed to be known up to a set of unknown parameters, and those parameters were estimated from the learning sequence. Typical techniques included maximum likelihood and Bayesian approaches. Judging by the tendencies present in the literature over the last twenty years, these methods have been almost completely replaced by the non-parametric approach (see e.g. [67], [70], [71], [79], [80], [81], [104], [105], [113], [114], [122], [175], [191], [195], [272], [296], [297]). In the non-parametric approach, no functional form of the probability densities is assumed; the densities are estimated by non-parametric estimators.
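To make the contrast concrete, the non-parametric estimators mentioned above can be illustrated by a Parzen-window (kernel) density estimate, which is the construction underlying Specht's PNN. The following is a minimal sketch, not the book's formulation; the Gaussian kernel and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def parzen_density(x, samples, h):
    """Parzen-window (kernel) density estimate at a point x.

    Non-parametric: no functional form of the density is assumed;
    the estimate is built directly from the learning sequence.
    Uses a Gaussian kernel with bandwidth h (an illustrative choice).
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[0]
    u = (x - samples) / h              # scaled distances to each sample
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel.sum() / (n * h)      # average of kernel contributions

# Example: estimate a standard-normal density from drawn samples
rng = np.random.default_rng(0)
data = rng.standard_normal(1000)
print(parzen_density(0.0, data, h=0.3))  # close to 1/sqrt(2*pi) ≈ 0.399
```

A PNN classifier applies one such estimate per class and assigns a new pattern to the class whose estimated density (weighted by its prior) is largest.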
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this chapter
Rutkowski, L. (2004). Introduction to Probabilistic Neural Networks. In: New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing. Studies in Fuzziness and Soft Computing, vol 143. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-40046-2_3
Print ISBN: 978-3-642-05820-2
Online ISBN: 978-3-540-40046-2