Neural Discriminant Analysis
They are quite expressive: for example, probability distributions defined by Chow expansions, unique probabilistic automata, or unique Markov models can also be written succinctly as PDFs.
It is possible to obtain, with high confidence, almost optimal decisions for classification problems that can be modelled by PDFs. The number of training examples needed for this is bounded by a polynomial of low degree in the relevant parameters.
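The abstract does not state the bound itself, but the flavour of such polynomial sample bounds can be illustrated by a standard Chernoff/Hoeffding calculation (the Chernoff bound is the classical tool here): to estimate the probability of a single event to within an additive error eps with confidence 1 - delta, a sample size polynomial in 1/eps and log(1/delta) suffices. The function below is an illustrative sketch of that calculation, not the bound proved in the paper.

```python
import math

def hoeffding_sample_size(eps: float, delta: float) -> int:
    """Samples sufficient so that the empirical frequency of an event
    lies within eps of its true probability with confidence 1 - delta,
    via the Hoeffding/Chernoff bound: m >= ln(2/delta) / (2 * eps^2).

    Illustrative only -- this is the bound for estimating a single
    probability, not the paper's bound for learning a whole PDF.
    """
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
```

Note that the dependence on the confidence parameter delta is only logarithmic, which is why "with high confidence" costs little in sample size.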
The evaluation of the training examples can be implemented on shallow neural nets.