Efficient Multi-method Rule Learning for Pattern Classification, Machine Learning, and Data Mining

  • Chinmay Maiti
  • Somnath Pal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4815)

Abstract

The work presented here focuses on combining multiple classifiers into a single classifier for pattern classification, machine learning for expert systems, and data mining tasks. The combination rests on the observation that, in many cases, more efficient concept learning is possible when the concepts learned by different approaches are merged into a single, more efficient concept. Experimental results for the algorithm, EMRL, on a representative collection of 25 data sets from different domains show that it performs significantly better than several state-of-the-art individual classifiers on 11 domains, whereas a state-of-the-art individual classifier performs significantly better than EMRL in only 5 cases.
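As a rough illustration of the underlying idea, the sketch below trains several different learners and merges their predictions by majority vote. It is a minimal sketch only: the use of scikit-learn, the majority-voting scheme, and the choice of member classifiers are illustrative assumptions, not EMRL's actual rule-combination procedure, which merges the learned rules themselves rather than just the predictions.

```python
# Minimal sketch: combine several distinct learning methods by
# majority vote and compare the combination against each member.
# This mirrors only the general multi-method idea, not EMRL itself.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Three different learning methods, standing in for the multiple
# rule-learning approaches a multi-method combiner would merge.
members = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("nb", GaussianNB()),
]

# Hard voting: each member casts one vote per instance.
combined = VotingClassifier(estimators=members, voting="hard")

# Compare each individual classifier with the combination, echoing
# the paper's comparison of EMRL against individual classifiers.
for name, clf in members + [("combined", combined)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:>8}: mean accuracy {scores.mean():.3f}")
```

On many data sets the voted combination matches or exceeds its best member, which is the behaviour the abstract reports for EMRL against individual state-of-the-art classifiers.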

Keywords

Machine learning · Multiple classifiers · Missing values · Discretization · Classification

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Chinmay Maiti (1)
  • Somnath Pal (2)
  1. Dept. of Info. Tech., Jadavpur University
  2. Dept. of Comp. Sc. & Tech., Bengal Engg. & Sc. University
