Ensemble MLP Classifier Design

  • Terry Windeatt
Part of the Studies in Computational Intelligence book series (SCI, volume 137)

Abstract

Multi-layer perceptrons (MLPs) are powerful classifiers that may provide superior performance compared with other classifiers, but they are often criticized for the number of free parameters that must be set. Most commonly, these parameters are tuned using a validation set or cross-validation, but there is no guarantee that such a pseudo-test set is representative of unseen data. Further difficulties with MLPs include long training times and convergence to local minima. In this chapter, an ensemble of MLP classifiers is proposed to address these problems. Parameters are selected for optimal performance using measures that correlate well with generalisation error.
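
As a concrete illustration of this kind of ensemble, the sketch below (Python, scikit-learn) bags single-hidden-layer MLPs and reads off the out-of-bag accuracy as a surrogate for generalisation error, so that no separate validation set is needed. This is a minimal sketch under stated assumptions, not the chapter's implementation: the synthetic dataset, hidden-layer size, and ensemble size are illustrative choices only.

  from sklearn.datasets import make_classification
  from sklearn.ensemble import BaggingClassifier
  from sklearn.model_selection import train_test_split
  from sklearn.neural_network import MLPClassifier

  # Synthetic two-class data standing in for a real benchmark set.
  X, y = make_classification(n_samples=600, n_features=20, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

  # Base classifier: a single-hidden-layer MLP with a small number of hidden nodes.
  base = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)

  # Bagging trains each MLP on a bootstrap replicate of the training set;
  # the ensemble decision is a (soft) majority vote over the base MLPs.
  # oob_score=True computes the out-of-bag accuracy, a proxy for
  # generalisation performance that needs no held-out validation set.
  # (In scikit-learn < 1.2 the keyword is base_estimator, not estimator.)
  ens = BaggingClassifier(estimator=base, n_estimators=25,
                          oob_score=True, random_state=0)
  ens.fit(X_tr, y_tr)

  print("out-of-bag accuracy:", round(ens.oob_score_, 3))
  print("test accuracy:", round(ens.score(X_te, y_te), 3))

Because each MLP is trained on a bootstrap replicate, roughly a third of the training samples are out-of-bag for any one base classifier, which is why the out-of-bag accuracy serves as a nearly unbiased estimate of test performance.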

Keywords

Linear Discriminant Analysis · Base Classifier · Code Word · Face Database · Classifier Decision
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Terry Windeatt
    1. Centre for Vision, Speech and Signal Processing, Department of Electronic Engineering, University of Surrey, Guildford, United Kingdom
