A High-Performance, Pipelined, FPGA-Based Genetic Algorithm Machine

  • Barry Shackleford
  • Greg Snider
  • Richard J. Carter
  • Etsuko Okushi
  • Mitsuhiro Yasuda
  • Katsuhiko Seo
  • Hiroto Yasuura

Abstract

Accelerating a genetic algorithm (GA) by implementing it in a reconfigurable field-programmable gate array (FPGA) is described. The implemented GA features random parent selection, which conserves selection circuitry; a steady-state memory model, which conserves chip area; and survival of fitter child chromosomes over their less-fit parent chromosomes, which promotes evolution. A net child-chromosome generation rate of one per clock cycle is obtained by pipelining the parent selection, crossover, mutation, and fitness evaluation functions. Complex fitness functions can be further pipelined to maintain a high-speed clock cycle, and fitness functions with a pipeline initiation interval greater than one can be implemented in multiple copies to maintain a net evaluated-chromosome throughput of one per clock cycle. Two prototypes are described. The first prototype (c. 1996 technology) is a multiple-FPGA implementation, running at a 1 MHz clock rate, that solves a 94-row × 520-column set covering problem 2,200× faster than a 100 MHz workstation running the same algorithm in C. The second prototype (Xilinx XCV300) is a single-FPGA implementation, running at a 66 MHz clock rate, that solves a 36-residue protein folding problem on a 2-D lattice 320× faster than a 366 MHz Pentium II. The largest FPGA currently available (Xilinx XCV3200E) has sufficient circuitry to implement 30 fitness function units, which would yield an acceleration of 9,600× for the 36-residue protein folding problem.
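The survival-based, steady-state scheme summarized above can be illustrated with a short software analogue. The sketch below is a minimal C model, not the hardware design: the population size, the 32-bit chromosome width, the bit-count fitness function, and the use of rand() are illustrative assumptions, and the hardware's concurrent pipeline stages are collapsed into one sequential loop.

/*
 * Software sketch of a survival-based, steady-state GA:
 * two parents are chosen at random, a child is produced by crossover and
 * mutation, and the child overwrites its less-fit parent only if the child
 * is fitter.  All parameters and the fitness function are placeholders.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define POP_SIZE    256          /* chromosomes held in population memory */
#define GENERATIONS 100000L      /* one child generated per iteration     */

typedef uint32_t chrom_t;        /* 32-bit chromosome for illustration    */

static chrom_t pop[POP_SIZE];
static int     fit[POP_SIZE];    /* cached fitness of each chromosome     */

/* Placeholder fitness: number of set bits (higher is fitter). */
static int fitness(chrom_t c)
{
    int n = 0;
    while (c) { n += (int)(c & 1u); c >>= 1; }
    return n;
}

/* Uniform-style crossover: each bit taken from one parent or the other
 * according to a pseudo-random mask. */
static chrom_t crossover(chrom_t a, chrom_t b)
{
    chrom_t mask = (chrom_t)rand() ^ ((chrom_t)rand() << 16);
    return (a & mask) | (b & ~mask);
}

/* Low-rate mutation: occasionally flip one randomly chosen bit. */
static chrom_t mutate(chrom_t c)
{
    if (rand() % 8 == 0)
        c ^= (chrom_t)1 << (rand() % 32);
    return c;
}

int main(void)
{
    srand(1);
    for (int i = 0; i < POP_SIZE; i++) {
        pop[i] = (chrom_t)rand() ^ ((chrom_t)rand() << 16);
        fit[i] = fitness(pop[i]);
    }

    for (long g = 0; g < GENERATIONS; g++) {
        int ia = rand() % POP_SIZE;          /* random parent selection */
        int ib = rand() % POP_SIZE;
        chrom_t child = mutate(crossover(pop[ia], pop[ib]));
        int fc = fitness(child);

        /* Survival rule: the child replaces its less-fit parent only if
         * it is fitter than that parent; otherwise it is discarded.     */
        int weaker = (fit[ia] <= fit[ib]) ? ia : ib;
        if (fc > fit[weaker]) {
            pop[weaker] = child;
            fit[weaker] = fc;
        }
    }

    int best = 0;
    for (int i = 1; i < POP_SIZE; i++)
        if (fit[i] > fit[best]) best = i;
    printf("best fitness = %d\n", fit[best]);
    return 0;
}

In the hardware machine, the selection, crossover, mutation, and evaluation steps of this loop run as concurrent pipeline stages, sustaining one child per clock cycle; when a fitness function's initiation interval exceeds one, several fitness units evaluate interleaved children so that net throughput remains one chromosome per cycle. The projected 9,600× speedup follows from scaling the single-unit 320× result by 30 fitness units (30 × 320 = 9,600).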

Keywords: genetic algorithm, genetic algorithm processor, reconfigurable computing, FPGA

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Barry Shackleford, Greg Snider, Richard J. Carter: Hewlett-Packard Laboratories, Palo Alto, U.S.A.
  • Etsuko Okushi, Mitsuhiro Yasuda, Katsuhiko Seo: Mitsubishi Electric Corporation, Kanagawa, Japan
  • Hiroto Yasuura: Kyushu University, Kasuga-shi, Japan
