Search Based Software Engineering: Techniques, Taxonomy, Tutorial

  • Mark Harman
  • Phil McMinn
  • Jerffeson Teixeira de Souza
  • Shin Yoo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7007)

Abstract

The aim of Search Based Software Engineering (SBSE) research is to move software engineering problems from human-based search to machine-based search, using a variety of techniques from the metaheuristic search, operations research and evolutionary computation paradigms. The idea is to exploit humans’ creativity and machines’ tenacity and reliability, rather than requiring humans to perform the more tedious, error-prone and thereby costly aspects of the engineering process. SBSE can also provide insights and decision support. This tutorial will present the reader with a step-by-step guide to the application of SBSE techniques to Software Engineering. It assumes neither previous knowledge nor experience with Search Based Optimisation. The intention is that the tutorial will cover sufficient material to allow the reader to become productive in successfully applying search based optimisation to a chosen Software Engineering problem of interest.
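
To make the abstract's recipe concrete (a representation of candidate solutions, a fitness function, and a search operator), the following is a minimal Python sketch of hill climbing applied to a toy test suite minimisation problem. The test names, coverage sets and fitness design are illustrative assumptions, not material taken from the chapter itself.

    import random

    # Toy data: which requirements (e.g. branches) each test case covers.
    # These names and sets are invented purely for illustration.
    COVERAGE = {
        "t1": {1, 2, 3},
        "t2": {2, 4},
        "t3": {3, 5},
        "t4": {1, 4, 5},
        "t5": {5},
    }

    def fitness(suite):
        # Maximise covered requirements first, then prefer smaller suites.
        covered = set().union(*(COVERAGE[t] for t in suite)) if suite else set()
        return (len(covered), -len(suite))

    def neighbours(suite):
        # Each neighbour flips one test case into or out of the suite.
        for t in COVERAGE:
            yield suite ^ frozenset({t})

    def hill_climb(seed=0):
        random.seed(seed)
        current = frozenset(t for t in COVERAGE if random.random() < 0.5)
        while True:
            best = max(neighbours(current), key=fitness)
            if fitness(best) <= fitness(current):
                return current  # local optimum: no neighbour improves fitness
            current = best

    suite = hill_climb()
    print(sorted(suite), fitness(suite))

Swapping the single-solution neighbourhood move for crossover and mutation over a population of suites would give a genetic algorithm, and occasionally accepting worsening moves would give simulated annealing, two of the metaheuristic families the abstract refers to.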

Keywords

Software Engineer, Pareto Front, Test Suite, Hill Climbing, Chemical Abstract Service
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Mark Harman (University College London, UK)
  • Phil McMinn (University of Sheffield, UK)
  • Jerffeson Teixeira de Souza (State University of Ceará, Brazil)
  • Shin Yoo (University College London, UK)
