Machine Learning, Volume 38, Issue 1–2, pp. 9–40

LEARNABLE EVOLUTION MODEL: Evolutionary Processes Guided by Machine Learning

  • Ryszard S. Michalski

Abstract

A new class of evolutionary computation processes is presented, called Learnable Evolution Model or LEM. In contrast to Darwinian-type evolution, which relies on mutation, recombination, and selection operators, LEM employs machine learning to generate new populations. Specifically, in Machine Learning mode, a learning system seeks reasons why certain individuals in a population (or in a collection of past populations) are superior to others at performing a designated class of tasks. These reasons, expressed as inductive hypotheses, are used to generate new populations. A remarkable property of LEM is that it is capable of quantum leaps (“insight jumps”) in the fitness function, unlike Darwinian-type evolution, which typically proceeds through numerous slight improvements. In our early experimental studies, LEM significantly outperformed the evolutionary computation methods used in the experiments, sometimes achieving speed-ups of two or more orders of magnitude in terms of the number of evolutionary steps. LEM has potential for a wide range of applications, particularly in domains such as complex optimization and search problems, engineering design, drug design, evolvable hardware, software engineering, economics, data mining, and automatic programming.
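As a rough illustration of the process described above, the sketch below shows one Machine Learning-mode step over a real-valued genome. It is entirely hypothetical: the paper specifies no code, and `lem_step`, `elite_frac`, and the interval-based "hypothesis" are simplified stand-ins for the paper's AQ-style rule learner, which induces rules that also discriminate against low performers.

```python
import random

def lem_step(population, fitness, elite_frac=0.3):
    """One Machine-Learning-mode step of a LEM-style loop (illustrative sketch).

    Ranks the population by fitness, takes the high-performing group, induces
    a deliberately simple "hypothesis" (the per-gene interval spanned by that
    group), and instantiates the hypothesis to produce the next population.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    n = len(ranked)
    high = ranked[: max(1, int(elite_frac * n))]  # the high-performing group

    dim = len(ranked[0])
    # Hypothesis: for each gene, the interval covered by the high group.
    bounds = [(min(ind[i] for ind in high), max(ind[i] for ind in high))
              for i in range(dim)]

    # Next population: retain the high group, fill the rest by sampling
    # uniformly inside the learned intervals.
    new_pop = [list(ind) for ind in high]
    while len(new_pop) < n:
        new_pop.append([random.uniform(lo, hi) for lo, hi in bounds])
    return new_pop

# Hypothetical usage: maximize -sum(x_i^2) over a 4-gene real-valued genome.
pop = [[random.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(50)]
for _ in range(30):
    pop = lem_step(pop, fitness=lambda ind: -sum(x * x for x in ind))
print(max(-sum(x * x for x in ind) for ind in pop))  # approaches 0
```

In the full methodology, Machine Learning mode alternates with a Darwinian-type evolution mode when progress stalls; this sketch omits that control loop and any discretization of continuous variables.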

Keywords: multistrategy learning, genetic algorithms, evolution model, evolutionary computation

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Ryszard S. Michalski (1, 2)
  1. Machine Learning and Inference Laboratory, George Mason University, Fairfax, VA, USA
  2. Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
