Evolution of Multi-class Single Layer Perceptron

  • Sarunas Raudys
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4432)


While training a single layer perceptron (SLP) in a two-class situation, one may obtain seven types of statistical classifiers, including the minimum empirical error and support vector (SV) classifiers. Unfortunately, neither classifier can be obtained automatically in the multi-category case. We suggest designing K(K-1)/2 pair-wise SLPs and combining them in a special way. Experiments with K=24-class chromosome data and K=10-class yeast infection data illustrate the effectiveness of the new multi-class network of single layer perceptrons.
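The pair-wise scheme described above can be sketched in a few lines: train one two-class SLP per class pair and combine their decisions over all K(K-1)/2 pairs. The sketch below uses simple majority voting as the combination step, which is only one possible fusion rule; the gradient-trained sigmoid unit, the learning rate, and the epoch count are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def train_slp(X, y, epochs=200, lr=0.1):
    """Train a single layer perceptron (sigmoid unit) by gradient descent.
    Labels y must be in {0, 1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid output
        w += lr * Xb.T @ (y - p) / len(y)          # gradient step
    return w

def train_pairwise(X, y):
    """Train K(K-1)/2 pair-wise SLPs, one per class pair (i, j)."""
    classes = np.unique(y)
    models = {}
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):
            i, j = classes[a], classes[b]
            mask = (y == i) | (y == j)             # keep only the two classes
            models[(i, j)] = train_slp(X[mask], (y[mask] == j).astype(float))
    return models

def predict(models, X):
    """Fuse the pair-wise SLPs by majority voting over class pairs."""
    classes = sorted({c for pair in models for c in pair})
    idx = {c: k for k, c in enumerate(classes)}
    votes = np.zeros((len(X), len(classes)))
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for (i, j), w in models.items():
        wins_j = Xb @ w > 0                        # pair (i, j): positive side votes j
        votes[np.arange(len(X)), np.where(wins_j, idx[j], idx[i])] += 1
    return np.array(classes)[votes.argmax(axis=1)]
```

For K classes this builds exactly K(K-1)/2 binary models; each test vector collects one vote per pair, and the class with the most votes wins. More refined fusion rules (e.g. pair-wise coupling of probability estimates) replace only the voting step.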


Keywords: Decision boundary; Fusion rule; Pattern class; Training vector; Support vector classifier





Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Sarunas Raudys
  1. Vilnius Gediminas Technical University, Sauletekio 11, LT-10223 Vilnius, Lithuania
