Improved Uniformity Enforcement in Stochastic Discrimination

  • Matthew Prior
  • Terry Windeatt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5519)


There are a variety of methods for inducing predictive systems from observed data, many of which fall within the field of machine learning. Some of the most effective algorithms in this domain succeed by combining a number of distinct predictive elements to form what can be described as a committee. Well-known examples of such algorithms are AdaBoost, bagging and random forests. Stochastic discrimination is a committee-forming algorithm that combines a large number of relatively simple predictive elements in an effort to achieve a high degree of accuracy. A key element of the success of this technique is that its coverage of the observed feature space should be uniform in nature. We introduce a new uniformity enforcement method which, on benchmark datasets, leads to greater predictive efficiency than the currently published method.
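To make the mechanics concrete, the following is a minimal sketch of two-class stochastic discrimination in the spirit of Kleinberg's formulation: many simple "weak models" (here, random axis-aligned boxes) are generated, only those enriched for the positive class are retained, and a point is rated by averaging the normalised indicator (C(x) − p0)/(p1 − p0) over the collected models. The coverage-balancing filter shown is a crude illustrative stand-in for uniformity enforcement, not the method proposed in the paper; all function names and parameters are the sketch's own assumptions.

```python
import numpy as np

def random_box(points, scale, rng):
    """A weak model: an axis-aligned box of random width centred on a
    randomly chosen training point."""
    centre = points[rng.integers(len(points))]
    widths = rng.uniform(0.1, 1.0, size=points.shape[1]) * scale
    lo, hi = centre - widths, centre + widths
    return lambda Z: np.all((Z >= lo) & (Z <= hi), axis=1)

def train_sd(X, y, n_models=300, min_enrichment=0.05, max_tries=20000, seed=0):
    """Collect weak models 'enriched' for class 1 (they cover a larger
    fraction of class-1 than class-0 training points), with a crude
    coverage-balancing filter standing in for uniformity enforcement."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    cls1 = (y == 1)
    pos = X[cls1]                    # centre candidate boxes on class-1 points
    models = []
    coverage = np.zeros(len(X))      # how often each point has been covered
    for _ in range(max_tries):
        if len(models) == n_models:
            break
        m = random_box(pos, scale, rng)
        inside = m(X)
        p1, p0 = inside[cls1].mean(), inside[~cls1].mean()
        if p1 - p0 < min_enrichment:  # not sufficiently enriched for class 1
            continue
        # uniformity heuristic: skip models whose class-1 points are already
        # covered noticeably more often than the class-1 average
        if coverage[inside & cls1].mean() > coverage[cls1].mean() + 1.0:
            continue
        coverage += inside
        models.append((m, p0, p1))
    return models

def discriminant(models, Z):
    """Kleinberg-style rating: average of (indicator - p0) / (p1 - p0);
    values near 1 suggest class 1, values near 0 suggest class 0."""
    ratings = [(m(Z).astype(float) - p0) / (p1 - p0) for m, p0, p1 in models]
    return np.mean(ratings, axis=0)

# Toy usage on two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
models = train_sd(X, y)
pred = (discriminant(models, X) > 0.5).astype(int)
```

The enrichment filter guarantees p1 > p0, so each retained model's rating pushes class-1 points towards 1 and class-0 points towards 0; the final threshold of 0.5 follows from the rating's expected value under Kleinberg's analysis.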


Keywords: Feature Space, Random Forest, Test Error, Normalise, Standard Deviation, Coverage Array




References

  1. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository (2007)
  2. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
  3. Breiman, L.: Bias, variance, and arcing classifiers (1996)
  4. Chen, D., Huang, P., Cheng, X.: A concrete statistical realization of Kleinberg's stochastic discrimination for pattern recognition, Part I: Two-class classification. Annals of Statistics 31(5), 1393–1412 (2003)
  5. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: International Conference on Machine Learning, pp. 148–156 (1996)
  6. Garner, S.R.: WEKA: The Waikato Environment for Knowledge Analysis. In: Proc. of the New Zealand Computer Science Research Students Conference, pp. 57–64 (1995)
  7. Ho, T.K.: The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8), 832–844 (1998)
  8. Kleinberg, E.M.: Stochastic discrimination. Annals of Mathematics and Artificial Intelligence 1 (1990)
  9. Kleinberg, E.M.: On the algorithmic implementation of stochastic discrimination. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(5), 473–490 (2000)
  10. Kleinberg, E.M., Ho, T.K.: Pattern recognition by stochastic modeling. In: Proceedings of the Third International Workshop on Frontiers in Handwriting Recognition, pp. 175–183. Partners Press (1993)
  11. Prior, M., Windeatt, T.: Over-fitting in ensembles of neural network classifiers within ECOC frameworks. In: Oza, N.C., Polikar, R., Kittler, J., Roli, F. (eds.) MCS 2005. LNCS, vol. 3541, pp. 286–295. Springer, Heidelberg (2005)
  12. Prior, M., Windeatt, T.: Parameter tuning using the out-of-bootstrap generalisation error estimate for stochastic discrimination and random forests. In: International Conference on Pattern Recognition, pp. 498–501 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Matthew Prior (1)
  • Terry Windeatt (1)
  1. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
