A Universal Approximator Network for Learning Conditional Probability Densities
A general approach is developed to learn the conditional probability density for a noisy time series. A universal architecture is proposed which avoids difficulties with the singular low-noise limit, and a suitable error function is presented that enables the probability density to be learnt. The method is compared with other recently developed approaches, and its effectiveness is demonstrated on a time series generated from a non-trivial stochastic dynamical system.
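The paper itself gives no implementation details in this abstract, but the general idea it describes (a network that outputs a full conditional density p(y|x) and is trained by a likelihood-based error function) can be sketched as follows. This is a minimal illustrative sketch, not the authors' architecture: it assumes a single-hidden-layer network emitting the mean and width of a conditional Gaussian, a negative log-likelihood loss, toy synthetic data, and numerical gradients; all names and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "noisy time series" surrogate: target with input-dependent noise level.
X = rng.uniform(-1, 1, size=(200, 1))
Y = np.sin(3 * X) + rng.normal(0, 0.1 + 0.2 * (X + 1), size=X.shape)

H = 8  # hidden units (arbitrary choice for the sketch)

def init_params():
    return {
        "W1": rng.normal(0, 0.5, (1, H)), "b1": np.zeros(H),
        "Wm": rng.normal(0, 0.5, (H, 1)), "bm": np.zeros(1),
        "Ws": rng.normal(0, 0.5, (H, 1)), "bs": np.zeros(1),
    }

def forward(p, x):
    """Map input x to the mean and width of a conditional Gaussian."""
    h = np.tanh(x @ p["W1"] + p["b1"])
    mu = h @ p["Wm"] + p["bm"]
    # A softplus plus a small floor keeps the width strictly positive --
    # one simple way to sidestep a singular zero-noise limit in the loss.
    sigma = np.log1p(np.exp(h @ p["Ws"] + p["bs"])) + 1e-3
    return mu, sigma

def nll(p, x, y):
    """Negative log-likelihood of y under the predicted conditional Gaussian."""
    mu, sigma = forward(p, x)
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu)**2 / (2 * sigma**2))

def num_grad(p, x, y, eps=1e-5):
    """Central-difference gradients; fine for a model this small."""
    g = {}
    for k, v in p.items():
        gk = np.zeros_like(v)
        it = np.nditer(v, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = v[i]
            v[i] = old + eps; up = nll(p, x, y)
            v[i] = old - eps; dn = nll(p, x, y)
            v[i] = old
            gk[i] = (up - dn) / (2 * eps)
        g[k] = gk
    return g

params = init_params()
loss_init = nll(params, X, Y)
for step in range(300):          # plain gradient descent on the NLL
    g = num_grad(params, X, Y)
    for k in params:
        params[k] -= 0.1 * g[k]
loss_final = nll(params, X, Y)
```

Because the error function is a negative log-likelihood rather than a squared error, the trained network predicts not just a point estimate but a full conditional density, from which confidence intervals or samples can be drawn.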
Keywords: Hidden Layer, Output Weight, Conditional Probability Distribution, Kernel Width, Conditional Probability Density