A Universal Approximator Network for Learning Conditional Probability Densities

Chapter in: Mathematics of Neural Networks

Part of the book series: Operations Research/Computer Science Interfaces Series (ORCS, volume 8)

Abstract

A general approach is developed to learn the conditional probability density for a noisy time series. A universal architecture is proposed which avoids difficulties with the singular low-noise limit. A suitable error function is presented, enabling the probability density to be learnt. The method is compared with other recently developed approaches, and its effectiveness is demonstrated on a time series generated from a non-trivial stochastic dynamical system.
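As a rough illustration of the general idea in the abstract, the sketch below learns a conditional density p(y_t | y_{t-1}) for a synthetic noisy time series by maximising the likelihood of a mixture-of-Gaussians output. The network size, the three-component Gaussian mixture, the synthetic series, and all parameter choices are assumptions made for illustration; this is not the chapter's specific universal architecture or error function.

```python
# Minimal sketch (assumed setup, not the chapter's architecture): learn a
# conditional density p(y_t | y_{t-1}) by maximum likelihood with a small
# network whose outputs parameterise a Gaussian mixture.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic noisy time series: y_t depends stochastically on y_{t-1}.
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * np.sin(2.5 * y[t - 1]) + 0.2 * rng.standard_normal()
X, Y = y[:-1, None], y[1:]            # predict y_t from y_{t-1}

H, K = 8, 3                            # hidden units, mixture components
sizes = [(H, 1), (H,), (K, H), (K,), (K, H), (K,), (K, H), (K,)]

def unpack(theta):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def nll(theta):
    """Negative log-likelihood of the targets under the predicted mixture."""
    W1, b1, Wp, bp, Wm, bm, Ws, bs = unpack(theta)
    h = np.tanh(X @ W1.T + b1)                     # shared hidden layer
    logits = h @ Wp.T + bp
    pi = np.exp(logits - logits.max(1, keepdims=True))
    pi /= pi.sum(1, keepdims=True)                 # mixing coefficients
    mu = h @ Wm.T + bm                             # component means
    sigma = np.exp(h @ Ws.T + bs) + 1e-3           # component std devs (> 0)
    z = (Y[:, None] - mu) / sigma
    comp = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.log((pi * comp).sum(1) + 1e-12).mean()

theta0 = 0.1 * rng.standard_normal(sum(int(np.prod(s)) for s in sizes))
res = minimize(nll, theta0, method="L-BFGS-B", options={"maxiter": 200})
print("final negative log-likelihood:", res.fun)

# Evaluate the learned conditional density p(y_t | y_{t-1} = 0.5) on a grid.
def density(theta, x, grid):
    W1, b1, Wp, bp, Wm, bm, Ws, bs = unpack(theta)
    h = np.tanh(np.array([[x]]) @ W1.T + b1)
    logits = h @ Wp.T + bp
    pi = np.exp(logits - logits.max()); pi /= pi.sum()
    mu = (h @ Wm.T + bm).ravel()
    sigma = (np.exp(h @ Ws.T + bs) + 1e-3).ravel()
    z = (grid[:, None] - mu) / sigma
    return (pi.ravel() * np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * sigma)).sum(1)

grid = np.linspace(-1.5, 1.5, 7)
print("p(y | y_prev = 0.5) on grid:", np.round(density(res.x, 0.5, grid), 3))
```

The L-BFGS-B call above relies on finite-difference gradients, which keeps the sketch short at the cost of speed; an analytic gradient or an autodiff framework would be the usual choice in practice.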

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Husmeier, D., Allen, D., Taylor, J.G. (1997). A Universal Approximator Network for Learning Conditional Probability Densities. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_32

  • DOI: https://doi.org/10.1007/978-1-4615-6099-9_32

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7794-8

  • Online ISBN: 978-1-4615-6099-9

  • eBook Packages: Springer Book Archive
