Software Testing

  • Gordon Fraser
  • José Miguel Rojas
Chapter

Abstract

Any nontrivial program contains errors in its source code. These “bugs” are annoying for users if they lead to application crashes and data loss, and they are worrisome if they lead to privacy leaks and security exploits. The economic damage caused by software bugs can be huge, and when software controls safety-critical systems such as automotive software, bugs can kill people. The primary tool to reveal and eliminate bugs is software testing: Testing a program means executing it with a selected set of inputs and checking whether the program behaves in the expected way; if it does not, a bug has been detected. The aim of testing is to find as many bugs as possible, but this is difficult because it is impossible to run all possible tests on a program. The challenge for a good tester is thus to identify the tests most likely to reveal bugs, and to execute them as efficiently as possible. In this chapter, we explore different ways to measure how “good” a set of tests is, as well as techniques to generate good sets of tests.
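
To make this definition concrete, the following is a minimal sketch of a test in JUnit 4 style. The class MaxTest, the method under test max, and the chosen inputs are illustrative assumptions, not examples taken from the chapter itself.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal sketch of testing as defined above: execute the program under
// test with selected inputs and check the observed behaviour against the
// expected one. All names here are hypothetical, for illustration only.
public class MaxTest {

    // Hypothetical program under test.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    @Test
    public void returnsLargerOfTwoValues() {
        // Selected input: (2, 3); expected output: 3.
        // If the assertion fails, a bug has been detected.
        assertEquals(3, max(2, 3));
    }

    @Test
    public void handlesEqualValues() {
        // A boundary case: both inputs equal.
        assertEquals(5, max(5, 5));
    }
}

Note that a passing test does not prove the absence of bugs; it only shows that the program behaved as expected on these particular inputs, which is why the choice of inputs matters so much.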

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Passau, Passau, Germany
  2. University of Leicester, Leicester, UK
