Multiple Classification Systems in the Context of Feature Extraction and Selection

  • Conference paper

Multiple Classifier Systems (MCS 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2364)

Abstract

Parallels between feature extraction/selection and multiple classification system methodologies are considered. Both approaches allow the designer to introduce prior information about the pattern recognition task to be solved. However, both are heavily affected by computational difficulties and by the small-sample-size / classifier-complexity problem, and neither approach by itself determines a unique data analysis algorithm.
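
As a purely illustrative sketch (not taken from the paper), the snippet below contrasts the two design routes the abstract compares: injecting prior knowledge by selecting features before training a single classifier, versus combining several simple classifiers trained on different feature subsets. The use of scikit-learn, the synthetic small-sample data set, and all parameter choices are assumptions made for this example only.

```python
# Illustrative sketch: feature selection + one classifier vs. a multiple
# classifier system on a small-sample, higher-dimensional synthetic task.
# All choices below (library, data, parameters) are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import cross_val_score

# Small sample size relative to dimensionality, echoing the abstract's concern.
X, y = make_classification(n_samples=60, n_features=40, n_informative=5,
                           random_state=0)

# Route (a): reduce the feature set first, then train a single linear classifier.
fs_route = make_pipeline(SelectKBest(f_classif, k=5),
                         LogisticRegression(max_iter=1000))

# Route (b): combine several simple classifiers, each restricted to a different
# random feature subset, and fuse them by majority vote.
rng = np.random.default_rng(0)
subsets = [rng.choice(X.shape[1], size=5, replace=False) for _ in range(7)]

def subset_clf(idx):
    # Hypothetical helper: a linear classifier seeing only the columns in idx.
    return make_pipeline(FunctionTransformer(lambda Z, idx=idx: Z[:, idx]),
                         LogisticRegression(max_iter=1000))

mcs_route = VotingClassifier([(f"m{i}", subset_clf(idx))
                              for i, idx in enumerate(subsets)], voting="hard")

for name, model in [("feature selection + one classifier", fs_route),
                    ("multiple classifier system", mcs_route)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

Both routes constrain the effective complexity of the final decision rule; which one works better depends on the data, which is precisely the point the abstract makes about neither approach yielding a unique algorithm.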

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Raudys, Š. (2002). Multiple Classification Systems in the Context of Feature Extraction and Selection. In: Roli, F., Kittler, J. (eds) Multiple Classifier Systems. MCS 2002. Lecture Notes in Computer Science, vol 2364. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45428-4_3

  • DOI: https://doi.org/10.1007/3-540-45428-4_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43818-2

  • Online ISBN: 978-3-540-45428-1
