Cooperative Coevolutionary Methods

Chapter
Part of the Operations Research/Computer Science Interfaces Series book series (ORCS, volume 36)

Abstract

This chapter presents a cooperative coevolutionary model for evolving artificial neural networks. The model is based on coevolving subnetworks that must cooperate to form a solution to a specific problem, rather than evolving complete networks. The combination of these subnetworks is itself part of the coevolutionary process: the best combinations of subnetworks are evolved together with the subnetworks themselves. Several subpopulations of subnetworks coevolve cooperatively while remaining genetically isolated, and individuals from each subpopulation are combined to form whole networks. This approach differs from most current models of evolutionary neural networks, which evolve whole networks directly. The model places as few restrictions as possible on network structure, allowing it to reach a wide variety of architectures during evolution and to be easily extended to other kinds of neural networks. The performance of the model on ten real-world classification problems is compared with a modular network, the adaptive mixture of experts, and with results reported in the literature.
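The coevolutionary scheme described above can be illustrated with a minimal sketch. This is not the chapter's actual model: the dataset is a toy XOR problem, each "subnetwork" is reduced to a single hidden unit, and credit is assigned by evaluating each subnetwork with randomly chosen collaborators from the other subpopulations (a simplification — the chapter coevolves the combinations themselves). All names and parameters here are illustrative assumptions.

```python
import random
import math

random.seed(0)

# Toy dataset (illustrative only; the chapter uses ten real-world problems).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

N_SUBPOPS = 3     # genetically isolated subpopulations of subnetworks
POP_SIZE = 20     # subnetworks per subpopulation
GENERATIONS = 50

def new_subnetwork():
    # Here a "subnetwork" is a single hidden unit: two input weights,
    # a bias, and an output weight. The real model allows richer structures.
    return [random.uniform(-1, 1) for _ in range(4)]

def forward(network, x):
    # A whole network sums the hidden units' weighted tanh outputs.
    total = 0.0
    for w1, w2, b, wo in network:
        total += wo * math.tanh(w1 * x[0] + w2 * x[1] + b)
    return 1 if total > 0 else 0

def fitness(network):
    # Fraction of correctly classified examples.
    return sum(forward(network, x) == y for x, y in DATA) / len(DATA)

def mutate(sub):
    return [w + random.gauss(0, 0.2) for w in sub]

subpops = [[new_subnetwork() for _ in range(POP_SIZE)]
           for _ in range(N_SUBPOPS)]

for gen in range(GENERATIONS):
    # Credit assignment: score each subnetwork by the fitness of a whole
    # network built with one random collaborator from every other subpopulation.
    scores = [[0.0] * POP_SIZE for _ in range(N_SUBPOPS)]
    for sp in range(N_SUBPOPS):
        for i, sub in enumerate(subpops[sp]):
            collaborators = [random.choice(subpops[other])
                             for other in range(N_SUBPOPS) if other != sp]
            scores[sp][i] = fitness([sub] + collaborators)
    # Truncation selection plus mutation, within each isolated subpopulation.
    for sp in range(N_SUBPOPS):
        ranked = [s for _, s in sorted(zip(scores[sp], subpops[sp]),
                                       key=lambda p: -p[0])]
        elite = ranked[:POP_SIZE // 2]
        subpops[sp] = elite + [mutate(random.choice(elite))
                               for _ in range(POP_SIZE - len(elite))]

# Greedily combine the best subnetwork from each subpopulation.
best = fitness([max(sp, key=lambda s: fitness([s])) for sp in subpops])
print("best combined fitness:", best)
```

The key point the sketch shows is that no subnetwork is ever evaluated in isolation: its fitness is defined only through the whole networks it helps to form, which is what drives the subpopulations toward complementary, cooperating roles.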

Key words

Neural networks, automatic design, cooperative coevolution, evolutionary computation, genetic algorithms, evolutionary programming

Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

Department of Computing and Numerical Analysis, University of Córdoba, Spain
