Using Genetic Improvement and Code Transplants to Specialise a C++ Program to a Problem Class

  • Justyna Petke
  • Mark Harman
  • William B. Langdon
  • Westley Weimer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8599)

Abstract

Genetic Improvement (GI) is a form of Genetic Programming that improves an existing program. We use GI to evolve a faster version of a C++ program, a Boolean satisfiability (SAT) solver called MiniSAT, specialising it for a particular problem class, namely Combinatorial Interaction Testing (CIT), using automated code transplantation. Our GI-evolved solver achieves an overall 17% improvement, making it comparable to average expert human performance. Additionally, this automatically evolved solver is faster than any of the human-improved solvers for the CIT problem.
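The GI approach the abstract describes searches over small edits to an existing program's source (deleting, replacing, or copying lines), keeping edits that preserve test-case behaviour while improving a performance measure. The paper applies this to MiniSAT's C++ source; the general loop can be sketched as a deliberately tiny, self-contained toy example (the "program", tests, and all names below are illustrative assumptions, not taken from the paper):

```python
import random

# Toy "program" under improvement: a list of source lines computing the
# sum of squares of `xs`. The second line is redundant work that a GI
# search can learn to delete, speeding the program up.
SEED_PROGRAM = [
    "total = 0",
    "waste = sorted(xs * 50)",        # dead computation: result never used
    "for x in xs: total += x * x",
    "result = total",
]

# Test cases play the role of the CIT benchmark instances in the paper:
# they define the behaviour any accepted variant must preserve.
TESTS = [([1, 2, 3], 14), ([], 0), ([5], 25)]

def run(program, xs):
    """Execute the program's lines; return `result`, or None on any error."""
    env = {"xs": list(xs)}
    try:
        exec("\n".join(program), {}, env)
        return env.get("result")
    except Exception:
        return None

def fitness(program):
    """Lexicographic fitness: correctness first, then fewer lines as a
    crude proxy for the runtime measure used in the real system."""
    passed = sum(run(program, xs) == want for xs, want in TESTS)
    return (passed, -len(program))

def mutate(program, rng):
    """GI-style line-level edit: delete a line, or copy an existing line
    to a new position (reusing existing code, in the spirit of transplants)."""
    child = list(program)
    if rng.random() < 0.5 and len(child) > 1:
        del child[rng.randrange(len(child))]
    else:
        child.insert(rng.randrange(len(child) + 1), rng.choice(child))
    return child

def genetic_improvement(seed, generations=200, rng=None):
    """Simple (1+1)-style search: keep a mutant if it is no worse."""
    rng = rng or random.Random(0)
    best = seed
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child) >= fitness(best):
            best = child
    return best

improved = genetic_improvement(SEED_PROGRAM)
```

The real system differs in essentials (it edits C++ at the level of an existing grammar, measures actual solver runtime on CIT-derived SAT instances, and draws replacement code from donor sources), but the accept-if-tests-still-pass-and-faster loop is the same shape.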

Keywords

genetic improvement · code transplants · code specialisation · Boolean satisfiability



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Justyna Petke (1)
  • Mark Harman (1)
  • William B. Langdon (1)
  • Westley Weimer (2)
  1. University College London, London, UK
  2. University of Virginia, Charlottesville, USA
