Part of the book series: Studies in Computational Intelligence ((SCI,volume 126))

Summary

In pattern recognition, many learning methods require numeric inputs. This chapter analyzes two-level classifier ensembles as a way to apply such numeric methods to nominal data. A different classifier is used at each level: the base-level classifier transforms the nominal inputs into continuous class probabilities, which the meta-level classifier then uses as inputs. An experimental validation is provided over 27 nominal datasets for enhancing a method that requires numeric inputs (the Support Vector Machine, SVM). Cascading, Stacking and Grading are used as two-level ensemble implementations, and each is also combined with another symbolic-to-numeric transformation, the Value Difference Metric (VDM). The results suggest that Cascading with binary decision trees at the base level and an SVM with VDM at the meta level achieves better accuracy than the other two-level configurations considered.
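The two-level idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' exact setup: it uses scikit-learn with a tiny made-up nominal dataset, a one-hot encoding in place of VDM, and a decision tree whose class-probability estimates are appended to the meta-level SVM's inputs (the cascading scheme).

```python
# Hedged sketch of a two-level "cascading" ensemble for nominal data.
# Assumptions: scikit-learn as the toolkit, a toy nominal dataset, and
# one-hot encoding standing in for the chapter's VDM transformation.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Toy nominal data: two categorical attributes, binary class.
X = np.array([["red", "small"], ["red", "big"], ["blue", "small"],
              ["blue", "big"], ["green", "small"], ["green", "big"]])
y = np.array([0, 0, 1, 1, 1, 0])

# Base level: encode nominal values and fit a decision tree.
enc = OneHotEncoder().fit(X)
Xn = enc.transform(X).toarray()
base = DecisionTreeClassifier(random_state=0).fit(Xn, y)

# Cascading: augment the meta-level inputs with the base level's
# continuous class-probability estimates.
probs = base.predict_proba(Xn)
X_meta = np.hstack([Xn, probs])

# Meta level: an SVM trained on the augmented, now-numeric inputs.
meta = SVC().fit(X_meta, y)
print(meta.predict(X_meta))
```

The key point the sketch shows is the data flow: the meta-level SVM never sees raw symbols, only the encoded attributes plus the base classifier's probability outputs.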




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Maudes, J., Rodríguez, J.J., García-Osorio, C. (2008). Cascading with VDM and Binary Decision Trees for Nominal Data. In: Okun, O., Valentini, G. (eds) Supervised and Unsupervised Ensemble Methods and their Applications. Studies in Computational Intelligence, vol 126. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78981-9_9

  • DOI: https://doi.org/10.1007/978-3-540-78981-9_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78980-2

  • Online ISBN: 978-3-540-78981-9

  • eBook Packages: Engineering (R0)
