Simple and Effective Connectionist Nonparametric Estimation of Probability Density Functions

  • Edmondo Trentin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4087)


Estimation of probability density functions (pdf) is a major topic in pattern recognition. Parametric techniques rely on an arbitrary assumption on the form of the underlying, unknown distribution. Nonparametric techniques remove this assumption. In particular, the Parzen Window (PW) relies on a combination of local window functions centered on the patterns of a training sample. Although effective, PW suffers from several limitations. Artificial neural networks (ANN) are, in principle, an alternative family of nonparametric models. ANNs are intensively used to estimate probabilities (e.g., class-posterior probabilities), but they have not been exploited so far to estimate pdfs. This paper introduces a simple neural-based algorithm for unsupervised, nonparametric estimation of pdfs, relying on PW. The approach overcomes the limitations of PW, possibly leading to improved pdf models. An experimental demonstration of the behavior of the algorithm with respect to PW is presented, using random samples drawn from a standard exponential pdf.
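To make the Parzen Window baseline concrete, the sketch below (not taken from the paper) assumes a Gaussian window function with a fixed, hand-picked bandwidth h and a sample drawn from a standard exponential pdf, as in the experiments. The final lines only illustrate the general idea of fitting a feed-forward network to PW-generated target outputs with a generic scikit-learn regressor; this is not the specific unsupervised training scheme proposed in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # generic MLP, used only for illustration

def parzen_window_pdf(x, sample, h):
    """Parzen Window estimate of the pdf at the query points x: the average of
    Gaussian window functions of width h centered on each training pattern."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)             # query points (column)
    centers = np.asarray(sample, dtype=float).reshape(1, -1)  # training patterns (row)
    windows = np.exp(-0.5 * ((x - centers) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return windows.mean(axis=1)

rng = np.random.default_rng(0)
train = rng.exponential(scale=1.0, size=200)      # sample from a standard exponential pdf
grid = np.linspace(0.0, 5.0, 200)
targets = parzen_window_pdf(grid, train, h=0.3)   # PW values used here as target outputs

# Illustrative only: a small feed-forward net fitted to the PW targets.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(grid.reshape(-1, 1), targets)
smooth_estimate = net.predict(grid.reshape(-1, 1))
```

Once trained, the network is a compact, smooth pdf model that no longer needs to store the whole training sample at prediction time, which is the kind of PW limitation the paper aims to overcome.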


Keywords: Window Function · Target Output · Nonparametric Model · Nonparametric Technique · Alternative Family





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Edmondo Trentin
  1. Dipartimento di Ingegneria dell’Informazione, Università di Siena, Siena, Italy
