
ADX Algorithm for Supervised Classification

  • Michał Dramiński
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 605)

Abstract

In this paper, a final version of the rule-based classifier ADX is presented. ADX is an algorithm for inductive learning and the subsequent classification of objects. As is typical of rule systems, its knowledge representation is easy for a human to understand. An advantage of the ADX algorithm is that its rules are not overly complicated and, for most real datasets, learning time grows linearly with the size of the dataset. The novel elements of this work are a new method for selecting the final ruleset in ADX and the classification mechanism. The algorithm's performance is illustrated by a series of experiments performed on a suitably designed set of artificial data.
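The abstract does not reproduce the rule-induction, ruleset-selection, or voting details of ADX, so the sketch below is only a generic illustration of how a conjunctive rule-based classifier of this kind can be organized: rules as attribute-value conjunctions with positive and negative coverage, and classification by aggregating the scores of matching rules. The Rule class, the quality() placeholder, and the toy weather attributes are illustrative assumptions, not the ADX definitions.

```python
from dataclasses import dataclass


@dataclass
class Rule:
    conditions: dict   # attribute -> required value (a conjunction of selectors)
    decision: str      # class predicted for objects satisfying all conditions
    pos: int = 0       # covered objects of the decision class (positive coverage)
    neg: int = 0       # covered objects of other classes (negative coverage)

    def matches(self, obj: dict) -> bool:
        return all(obj.get(a) == v for a, v in self.conditions.items())

    def quality(self) -> float:
        # Placeholder score; ADX defines its own rule-quality measures
        # (e.g. based on positive/negative coverage), not reproduced here.
        return self.pos / (self.pos + self.neg + 1.0)


def classify(obj: dict, rules: list, classes: list) -> str:
    # Sum the quality of every matching rule per class and return the best class.
    score = {c: 0.0 for c in classes}
    for r in rules:
        if r.matches(obj):
            score[r.decision] += r.quality()
    return max(score, key=score.get)


# Toy usage with two hand-written rules over weather-style attributes.
rules = [
    Rule({"outlook": "sunny", "humidity": "high"}, "no", pos=3, neg=0),
    Rule({"outlook": "overcast"}, "yes", pos=4, neg=1),
]
print(classify({"outlook": "overcast", "humidity": "high"}, rules, ["yes", "no"]))
```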

Keywords

Multivariate Adaptive Regression Spline, Prediction Ability, Decision Attribute, Strong Rule, Negative Coverage


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
