Comparison of Two Classification Methodologies on a Real-World Biomedical Problem

  • Ray Somorjai
  • Arunas Janeliunas
  • Richard Baumgartner
  • Sarunas Raudys
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2396)


We compare two diverse classification strategies on real-life biomedical data. One is based on a genetic-algorithm-driven feature extraction method, combined with data fusion and the use of a simple, single classifier such as linear discriminant analysis. The other exploits a single-layer-perceptron-based, data-driven evolution of the optimal classifier, together with data fusion. We discuss the intricate interplay between dataset size, the number of features, and classifier complexity, and suggest different techniques for handling such problems.
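The first strategy pairs genetic-algorithm-driven feature selection with a simple linear discriminant. The sketch below is a minimal illustration on synthetic data, not the authors' actual pipeline: the synthetic dataset, the resubstitution-accuracy fitness function, the Fisher-discriminant scorer, and all GA parameters (population size, truncation selection, one-point crossover, mutation rate) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: 6 features, only the first 2 are informative.
n = 100
X0 = rng.normal(0.0, 1.0, (n, 6))
X1 = rng.normal(0.0, 1.0, (n, 6))
X1[:, :2] += 2.0                       # class shift on the informative features
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

def lda_accuracy(X, y, mask):
    """Resubstitution accuracy of a Fisher linear discriminant trained
    on the feature subset selected by the boolean mask."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    Sw = np.cov(Xs[y == 0], rowvar=False) + np.cov(Xs[y == 1], rowvar=False)
    Sw = np.atleast_2d(Sw) + 1e-6 * np.eye(int(mask.sum()))  # regularize
    w = np.linalg.solve(Sw, m1 - m0)                          # Fisher direction
    thr = w @ (m0 + m1) / 2.0
    pred = (Xs @ w > thr).astype(float)
    return (pred == y).mean()

# Simple genetic algorithm over binary feature-selection masks.
pop = rng.random((20, 6)) < 0.5
for gen in range(30):
    fit = np.array([lda_accuracy(X, y, m) for m in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:10]]           # truncation selection: keep the top half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 6)        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(6) < 0.05     # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([lda_accuracy(X, y, m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

Because the fitness is resubstitution accuracy, the GA tends to converge on masks containing the two informative features; a production pipeline would instead score candidates by cross-validation to control the small-sample optimism the paper discusses.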


Keywords: Linear Discriminant Analysis · Fusion Rule · Classifier Fusion · Classification Methodology · Multiple Classifier System
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Ray Somorjai (1)
  • Arunas Janeliunas (2)
  • Richard Baumgartner (1)
  • Sarunas Raudys (2)
  1. Institute for Biodiagnostics, NRCC, Winnipeg, Canada
  2. Department of Mathematics and Informatics, Vilnius University, Vilnius, Lithuania