Machine Learning, Volume 13, Issue 2–3, pp 161–188

Using Genetic Algorithms for Concept Learning

  • Kenneth A. de Jong
  • William M. Spears
  • Diana F. Gordon

Abstract

In this article, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions.

Keywords: concept learning, genetic algorithms, bias adjustment
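The abstract describes the approach only at a high level. As a rough illustration of the underlying idea, the following is a minimal sketch of evolving a concept classification rule with a genetic algorithm, assuming a single conjunctive rule encoded as a bitstring (two bits per Boolean feature: a "care" flag and a required value) and fitness measured as training-set accuracy. This is not GABIL's actual representation or operators, which the full article details; all names, parameters, and the demo target concept below are hypothetical.

    import random

    random.seed(0)

    N_FEATURES = 4          # Boolean features per example
    POP_SIZE = 30           # individuals per generation
    GENERATIONS = 40
    MUT_RATE = 0.02         # per-bit mutation probability

    def target(x):
        # Hypothetical target concept for the demo: f0 AND NOT f2.
        return x[0] == 1 and x[2] == 0

    # Training set: every Boolean vector of length N_FEATURES with its label.
    examples = []
    for i in range(2 ** N_FEATURES):
        x = [int(b) for b in format(i, "0{}b".format(N_FEATURES))]
        examples.append((x, int(target(x))))

    def matches(rule, x):
        # A rule uses two bits per feature: (care, value). It matches an
        # example if every "care" feature has the required value.
        for i in range(N_FEATURES):
            care, value = rule[2 * i], rule[2 * i + 1]
            if care and x[i] != value:
                return False
        return True

    def fitness(rule):
        # Fitness = fraction of training examples classified correctly
        # (rule matches -> predict positive, otherwise negative).
        correct = sum(1 for x, y in examples if int(matches(rule, x)) == y)
        return correct / len(examples)

    def tournament(pop, scores, k=3):
        # Return the fittest of k randomly chosen individuals.
        best = max(random.sample(range(len(pop)), k), key=lambda i: scores[i])
        return pop[best]

    def crossover(a, b):
        point = random.randrange(1, len(a))   # one-point crossover
        return a[:point] + b[point:]

    def mutate(rule):
        return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in rule]

    population = [[random.randint(0, 1) for _ in range(2 * N_FEATURES)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        scores = [fitness(ind) for ind in population]
        population = [mutate(crossover(tournament(population, scores),
                                       tournament(population, scores)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print("best rule bits:", best, " training accuracy:", fitness(best))

GABIL itself evolves disjunctive sets of such rules, uses incremental (batch-on-failure) retraining, and adds adjustable bias operators; the sketch above only shows the basic generate-evaluate-select loop that motivates the architecture.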

References

  1. Baeck, T., Hoffmeister, F., & Schwefel, H. (1991). A survey of evolution strategies. Proceedings of the Fourth International Conference on Genetic Algorithms (pp. 2–9). La Jolla, CA: Morgan Kaufmann.
  2. Booker, L. (1989). Triggered rule discovery in classifier systems. Proceedings of the Third International Conference on Genetic Algorithms (pp. 265–274). Fairfax, VA: Morgan Kaufmann.
  3. Davis, L. (1989). Adapting operator probabilities in genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms (pp. 61–69). Fairfax, VA: Morgan Kaufmann.
  4. De Jong, K. (1987). Using genetic algorithms to search program spaces. Proceedings of the Second International Conference on Genetic Algorithms (pp. 210–216). Cambridge, MA: Lawrence Erlbaum.
  5. De Jong, K., & Spears, W. (1989). Using genetic algorithms to solve NP-complete problems. Proceedings of the Third International Conference on Genetic Algorithms (pp. 124–132). Fairfax, VA: Morgan Kaufmann.
  6. De Jong, K., & Spears, W. (1991). Learning concept classification rules using genetic algorithms. Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (pp. 651–656). Sydney, Australia: Morgan Kaufmann.
  7. Goldberg, D. (1989). Genetic algorithms in search, optimization, and machine learning. New York: Addison-Wesley.
  8. Gordon, D. (1990). Active bias adjustment for incremental, supervised concept learning. Doctoral dissertation, Computer Science Department, University of Maryland, College Park, MD.
  9. Greene, D., & Smith, S. (1987). A genetic system for learning models of consumer choice. Proceedings of the Second International Conference on Genetic Algorithms (pp. 217–223). Cambridge, MA: Lawrence Erlbaum.
  10. Grefenstette, J. (1986). Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(1), 122–128.
  11. Grefenstette, J. (1989). A system for learning control strategies with genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms (pp. 183–190). Fairfax, VA: Morgan Kaufmann.
  12. Holder, L. (1990). The general utility problem in machine learning. Proceedings of the Seventh International Conference on Machine Learning (pp. 402–410). Austin, TX: Morgan Kaufmann.
  13. Holland, J. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: The University of Michigan Press.
  14. Holland, J. (1986). Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In R. Michalski, J. Carbonell, & T. Mitchell (Eds.), Machine learning: An artificial intelligence approach. Los Altos, CA: Morgan Kaufmann.
  15. Iba, G. (1979). Learning disjunctive concepts from examples (A.I. Memo 548). Cambridge, MA: Massachusetts Institute of Technology.
  16. Janikow, C. (1991). Inductive learning of decision rules from attribute-based examples: A knowledge-intensive genetic algorithm approach (TR91-030). Chapel Hill, NC: The University of North Carolina at Chapel Hill, Department of Computer Science.
  17. Koza, J. (1991). Concept formation and decision tree induction using the genetic programming paradigm. In H. P. Schwefel & R. Maenner (Eds.), Parallel problem solving from nature. Berlin: Springer-Verlag.
  18. Michalski, R. (1983). A theory and methodology of inductive learning. In R. Michalski, J. Carbonell, & T. Mitchell (Eds.), Machine learning: An artificial intelligence approach. Palo Alto, CA: Tioga.
  19. Michalski, R. (1990). Learning flexible concepts: Fundamental ideas and a method based on two-tiered representation. In Y. Kodratoff & R. Michalski (Eds.), Machine learning: An artificial intelligence approach. San Mateo, CA: Morgan Kaufmann.
  20. Michalski, R., Mozetic, I., Hong, J., & Lavrac, N. (1986). The AQ15 inductive learning system: An overview and experiments (Technical Report UIUCDCS-R-86-1260). Urbana-Champaign, IL: University of Illinois.
  21. Mozetic, I. (1985). NEWGEM: Program for learning from examples, program documentation and user's guide (Report UIUCDCS-F-85-949). Urbana-Champaign, IL: University of Illinois.
  22. Provost, F. (1991). Navigation of an extended bias space for inductive learning. Ph.D. thesis proposal, Computer Science Department, University of Pittsburgh, Pittsburgh, PA.
  23. Quinlan, J. (1986). Induction of decision trees. Machine Learning, 1(1), 81–106.
  24. Quinlan, J. (1989). Documentation and user's guide for C4.5 (unpublished).
  25. Rendell, L. (1985). Genetic plans and the probabilistic learning system: Synthesis and results. Proceedings of the First International Conference on Genetic Algorithms (pp. 60–73). Pittsburgh, PA: Lawrence Erlbaum.
  26. Rendell, L., Seshu, R., & Tcheng, D. (1987). More robust concept learning using dynamically-variable bias. Proceedings of the Fourth International Workshop on Machine Learning (pp. 66–78). Irvine, CA: Morgan Kaufmann.
  27. Schaffer, J., & Morishima, A. (1987). An adaptive crossover distribution mechanism for genetic algorithms. Proceedings of the Second International Conference on Genetic Algorithms (pp. 36–40). Cambridge, MA: Lawrence Erlbaum.
  28. Smith, S. (1983). Flexible learning of problem solving heuristics through adaptive search. Proceedings of the Eighth International Joint Conference on Artificial Intelligence (pp. 422–425). Karlsruhe, Germany: William Kaufmann.
  29. Tcheng, D., Lambert, B., Lu, S., & Rendell, L. (1989). Building robust learning systems by combining induction and optimization. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (pp. 806–812). Detroit, MI: Morgan Kaufmann.
  30. Wilson, S. (1987). Quasi-Darwinian learning in a classifier system. Proceedings of the Fourth International Workshop on Machine Learning (pp. 59–65). Irvine, CA: Morgan Kaufmann.
  31. Utgoff, P. (1988). ID5R: An incremental ID3. Proceedings of the Fifth International Conference on Machine Learning (pp. 107–120). Ann Arbor, MI: Morgan Kaufmann.

Copyright information

© Kluwer Academic Publishers 1993

Authors and Affiliations

  • Kenneth A. de Jong (1)
  • William M. Spears (2)
  • Diana F. Gordon
  1. Computer Science Department, George Mason University, Fairfax
  2. Naval Research Laboratory, Washington
