Towards Efficient Data-Flow Test Data Generation

  • Chapter
Theories of Programming and Formal Methods

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14080)

Abstract

Data-flow testing (DFT) aims to detect potential data interaction anomalies by focusing on the points at which variables receive values and the points at which these values are used. Such test objectives are referred to as def-use pairs. However, the complexity of DFT still overwhelms testers in practice. To tackle this problem, we introduce a hybrid testing framework for data-flow based test generation: (1) the core of our framework is symbolic execution (SE), enhanced with a novel guided path exploration strategy to improve testing performance; and (2) we systematically cast DFT as reachability checking in software model checking (SMC) to complement SE, yielding a practical DFT technique that combines the strengths of both. We implemented our framework for C programs on top of the state-of-the-art symbolic execution engine KLEE and instantiated it with three different software model checkers. Our evaluation on 28,354 def-use pairs collected from 33 open-source and industrial program subjects shows that (1) our SE-based approach can improve DFT performance by 15–48% in terms of testing time compared with existing search strategies; and (2) our combined approach can further reduce testing time by 20.1–93.6% and, by eliminating infeasible test objectives, improve data-flow coverage by 27.8–45.2%. The combined approach also enables the two components to cross-check each other for reliable and robust testing results.
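
To make the SMC reduction concrete, the following minimal C sketch (ours, for illustration only; the flag variable, function, and label names are hypothetical) shows how covering a def-use pair can be recast as the reachability of a single program location, which a model checker can then decide:

    /* The def-use pair of `x` (def in the then-branch, use in the
       return) is covered iff label DUA_COVERED is reachable. The
       model checker either returns a counterexample trace, from
       which a covering input for `a` is extracted, or proves the
       label unreachable, i.e., the pair is infeasible. A redefinition
       of `x` between def and use would reset cover_flag (omitted). */
    int cover_flag = 0;

    int foo(int a) {
        int x = 0;
        if (a > 0) {
            x = a;               /* def of x */
            cover_flag = 1;      /* instrumentation: def reached */
        }
        if (a % 2 == 0) {
            if (cover_flag) {    /* use reached on a def-clear path */
    DUA_COVERED: ;               /* reachability target */
            }
            return x + 1;        /* use of x */
        }
        return 0;
    }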

This paper was completed on Jifeng He’s 80th birthday, May 2023.

Notes

  1. In this paper, we focus on the problem of classic data-flow testing [39, 89], i.e., finding an input for a given def-use pair at a time. We do not consider the case where some pairs are accidentally covered while targeting another pair, since this has already been investigated in collateral coverage-based approaches [65, 66].

  2. We follow the all def-use coverage criterion defined by Rapps and Weyuker [75, 76], since almost all of the literature that followed uses or extends this definition, as revealed by a recent survey [86].

  3. A pending path is a path that has not yet been fully explored (it corresponds to an unterminated state).

  4. We use the line number followed by T or F to denote the true or false branch of the if statement at the corresponding line; e.g., 5T denotes the true branch of the if statement at line 5.

  5. We use CIL as the C parser to transform the source code into an equivalent, simplified form via the --dosimplify option, in which each statement contains at most one operator (illustrated after these notes).

  6. We use the latest versions of these model checkers at the time of writing.

  7. Cyclomatic complexity is a software metric that indicates the complexity of a program; for a control-flow graph with E edges, N nodes, and P connected components, it is defined as E − N + 2P (McCabe's formula). Standard software development guidelines recommend that the cyclomatic complexity of a module not exceed 10.

  8. SIQR = (Q3 − Q1)/2 measures the variability of testing time, where Q1 is the lower quartile and Q3 is the upper quartile (a computation sketch follows these notes).

  9. In theory, the symbolic execution-based approach cannot identify infeasible pairs unless it enumerates all possible paths, which is impossible in practice. We therefore consider only the testing times of covered (feasible) pairs in the performance evaluation.

  10. BLAST hangs on totinfo, and CPAchecker crashes on some of the pairs from cdaudio.
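
Two small illustrations for the notes above. First, the CIL simplification of Note 5: a schematic sketch of ours (CIL generates its own temporary names; tmp is only illustrative), in which each resulting statement contains at most one operator.

    /* before --dosimplify: two operators in one statement */
    x = a + b * c;

    /* after --dosimplify (schematic): one operator per statement */
    tmp = b * c;
    x = a + tmp;

Second, the SIQR of Note 8, as a minimal self-contained C sketch (ours; it uses the median-of-halves convention for quartiles, so other quartile conventions may shift the result slightly):

    #include <stdio.h>

    /* median of a sorted array of length n */
    static double median(const double *v, int n) {
        return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
    }

    /* SIQR = (Q3 - Q1) / 2 over a sorted sample of testing times */
    double siqr(const double *sorted, int n) {
        int h = n / 2;                           /* size of each half */
        double q1 = median(sorted, h);           /* lower quartile */
        double q3 = median(sorted + n - h, h);   /* upper quartile */
        return (q3 - q1) / 2.0;
    }

    int main(void) {
        double t[] = {2, 4, 6, 8, 10, 12, 14};   /* testing times (s) */
        printf("SIQR = %.1f\n", siqr(t, 7));     /* Q1 = 4, Q3 = 12 -> 4.0 */
        return 0;
    }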

References

  1. Cyclo. http://www.gentoogeek.org/cyclo.html

  2. ALDanial: cloc. GitHub (2018)

  3. Alexander, R.T., Offutt, J., Stefik, A.: Testing coupling relationships in object-oriented programs. Softw. Test. Verif. Reliab. 20(4), 291–327 (2010)

  4. Ammann, P., Offutt, J.: Introduction to Software Testing, 1st edn. Cambridge University Press, New York (2008)

  5. Ball, T., Rajamani, S.K.: The SLAM project: debugging system software via static analysis. In: Conference Record of POPL 2002: The 29th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, OR, USA, 16–18 January 2002, pp. 1–3 (2002)

  6. Baluda, M., Braione, P., Denaro, G., Pezzè, M.: Structural coverage of feasible code. In: The 5th Workshop on Automation of Software Test, AST 2010, 3–4 May 2010, Cape Town, South Africa, pp. 59–66 (2010)

  7. Baluda, M., Braione, P., Denaro, G., Pezzè, M.: Enhancing structural software coverage by incrementally computing branch executability. Software Qual. J. 19(4), 725–751 (2011)

  8. Baluda, M., Denaro, G., Pezzè, M.: Bidirectional symbolic analysis for effective branch testing. IEEE Trans. Software Eng. 42(5), 403–426 (2016)

  9. Bardin, S., et al.: Sound and quasi-complete detection of infeasible test requirements. In: 8th IEEE International Conference on Software Testing, Verification and Validation, ICST 2015, Graz, Austria, 13–17 April 2015, pp. 1–10 (2015)

  10. Beckman, N.E., Nori, A.V., Rajamani, S.K., Simmons, R.J., Tetali, S., Thakur, A.V.: Proofs from tests. IEEE Trans. Software Eng. 36(4), 495–508 (2010)

  11. Beyer, D.: Competition on software verification – (SV-COMP). In: Flanagan, C., König, B. (eds.) TACAS 2012. LNCS, vol. 7214, pp. 504–524. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28756-5_38

  12. Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proceedings of the 26th International Conference on Software Engineering, ICSE 2004, pp. 326–335. IEEE Computer Society, Washington, DC (2004)

  13. Beyer, D., Henzinger, T.A., Jhala, R., Majumdar, R.: The software model checker BLAST: applications to software engineering. Int. J. Softw. Tools Technol. Transf. 9(5), 505–525 (2007)

  14. Beyer, D., Keremoglu, M.E.: CPAchecker: a tool for configurable software verification. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 184–190. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_16

  15. Beyer, D., Lemberger, T.: Software verification: testing vs. model checking - a comparative evaluation of the state of the art. In: Strichman, O., Tzoref-Brill, R. (eds.) HVC 2017. LNCS, vol. 10629, pp. 99–114. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70389-3_7

  16. Burnim, J., Sen, K.: Heuristics for scalable dynamic test generation. In: 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2008), 15–19 September 2008, L’Aquila, Italy, pp. 443–446 (2008)

  17. Buy, U.A., Orso, A., Pezzè, M.: Automated testing of classes. In: ISSTA, pp. 39–48 (2000)

  18. Cadar, C., Dunbar, D., Engler, D.R.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: USENIX Symposium on Operating Systems Design and Implementation, pp. 209–224 (2008)

  19. Cadar, C., Ganesh, V., Pawlowski, P.M., Dill, D.L., Engler, D.R.: EXE: automatically generating inputs of death. In: Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS 2006, Alexandria, VA, USA, 30 October–3 November 2006, pp. 322–335 (2006)

  20. Cadar, C., Sen, K.: Symbolic execution for software testing: three decades later. Commun. ACM 56(2), 82–90 (2013)

  21. Chaki, S., Clarke, E.M., Groce, A., Jha, S., Veith, H.: Modular verification of software components in C. In: Proceedings of the 25th International Conference on Software Engineering, 3–10 May 2003, Portland, Oregon, USA, pp. 385–395 (2003)

  22. Chaki, S., Clarke, E.M., Groce, A., Jha, S., Veith, H.: Modular verification of software components in C. IEEE Trans. Software Eng. 30(6), 388–402 (2004)

  23. Chatterjee, B., Ryder, B.G.: Data-flow-based testing of object-oriented libraries. Technical report DCS-TR-382, Rutgers University (1999)

  24. Clarke, E., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 168–176. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24730-2_15

  25. Clarke, E.M., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Proceedings of the Tools and Algorithms for the Construction and Analysis of Systems, 10th International Conference, TACAS 2004, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2004, Barcelona, Spain, 29 March–2 April 2004, pp. 168–176 (2004)

  26. Clarke, L.A.: A program testing system. In: Proceedings of the 1976 Annual Conference, Houston, Texas, USA, 20–22 October 1976, pp. 488–491 (1976)

  27. Clarke, L.A., Podgurski, A., Richardson, D.J., Zeil, S.J.: A formal evaluation of data flow path selection criteria. IEEE Trans. Software Eng. 15(11), 1318–1332 (1989)

  28. Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven Concolic testing. In: Jobstmann, B., Leino, K.R.M. (eds.) VMCAI 2016. LNCS, vol. 9583, pp. 328–347. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49122-5_16

  29. Denaro, G., Margara, A., Pezzè, M., Vivanti, M.: Dynamic data flow testing of object oriented systems. In: 37th IEEE/ACM International Conference on Software Engineering, ICSE 2015, Florence, Italy, 16–24 May 2015, vol. 1, pp. 947–958 (2015)

  30. Denaro, G., Pezzè, M., Vivanti, M.: On the right objectives of data flow testing. In: IEEE Seventh International Conference on Software Testing, Verification and Validation, ICST 2014, 31 March–4 April 2014, Cleveland, Ohio, USA, pp. 71–80 (2014)

  31. Do, T., Fong, A.C.M., Pears, R.: Precise guidance to dynamic test generation. In: Proceedings of the 7th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE), pp. 5–12 (2012)

  32. Eler, M.M., Endo, A.T., Durelli, V., Procópio-PR, C.: Covering user-defined data-flow test requirements using symbolic execution. In: Proceedings of the Thirteenth Brazilian Symposium On Software Quality (SBQS), pp. 16–30 (2014)

  33. ETAPS: Competition on software verification (SV-COMP). ETAPS European Joint Conference on Theory & Practice of Software - TACAS 2017 (2017). https://sv-comp.sosy-lab.org/2017/

  34. Foreman, L.M., Zweben, S.H.: A study of the effectiveness of control and data flow testing strategies. J. Syst. Softw. 21(3), 215–228 (1993)

  35. Frankl, P.G., Weiss, S.N.: An experimental comparison of the effectiveness of branch testing and data flow testing. IEEE Trans. Softw. Eng. 19(8), 774–787 (1993)

  36. Frankl, P.G., Iakounenko, O.: Further empirical studies of test effectiveness. In: SIGSOFT 1998, Proceedings of the ACM SIGSOFT International Symposium on Foundations of Software Engineering, Lake Buena Vista, Florida, USA, 3–5 November 1998, pp. 153–162 (1998)

  37. Fraser, G., Wotawa, F., Ammann, P.: Testing with model checkers: a survey. Softw. Test. Verification Reliab. 19(3), 215–261 (2009)

  38. Ghiduk, A.S.: A new software data-flow testing approach via ant colony algorithms. Univ. J. Comput. Sci. Eng. Technol. 1(1), 64–72 (2010)

  39. Ghiduk, A.S., Harrold, M.J., Girgis, M.R.: Using genetic algorithms to aid test-data generation for data-flow coverage. In: APSEC, pp. 41–48 (2007)

  40. Girgis, M.R.: Using symbolic execution and data flow criteria to aid test data selection. Softw. Test. Verif. Reliab. 3(2), 101–112 (1993)

  41. Girgis, M.R.: Automatic test data generation for data flow testing using a genetic algorithm. J. UCS 11(6), 898–915 (2005)

  42. Girgis, M.R., Ghiduk, A.S., Abd-elkawy, E.H.: Automatic generation of data flow test paths using a genetic algorithm. Int. J. Comput. Appl. 89(12), 29–36 (2014)

  43. Godefroid, P., Klarlund, N., Sen, K.: DART: directed automated random testing. In: Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 213–223. ACM, New York (2005)

  44. Goldberg, A., Wang, T., Zimmerman, D.: Applications of feasible path analysis to program testing. In: Proceedings of the 1994 International Symposium on Software Testing and Analysis, ISSTA 1994, Seattle, WA, USA, 17–19 August 1994, pp. 80–94 (1994)

  45. Harman, M., Kim, S.G., Lakhotia, K., McMinn, P., Yoo, S.: Optimizing for the number of tests generated in search based test data generation with an application to the oracle cost problem. In: Third International Conference on Software Testing, Verification and Validation, ICST 2010, Paris, France, 7–9 April 2010, Workshops Proceedings, pp. 182–191 (2010)

  46. Harrold, M.J., Rothermel, G.: Performing data flow testing on classes. In: SIGSOFT FSE, pp. 154–163 (1994)

  47. Harrold, M.J., Soffa, M.L.: Efficient computation of interprocedural definition-use chains. ACM Trans. Program. Lang. Syst. 16(2), 175–204 (1994)

  48. Hassan, M.M., Andrews, J.H.: Comparing multi-point stride coverage and dataflow coverage. In: 35th International Conference on Software Engineering, ICSE 2013, San Francisco, CA, USA, 18–26 May 2013, pp. 172–181 (2013)

  49. Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Lazy abstraction. In: Conference Record of POPL 2002: The 29th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, OR, USA, 16–18 January 2002, pp. 58–70 (2002)

  50. Hierons, R.M., et al.: Using formal specifications to support testing. ACM Comput. Surv. 41(2), 1–76 (2009)

  51. Hong, H.S., Cha, S.D., Lee, I., Sokolsky, O., Ural, H.: Data flow testing as model checking. In: Proceedings of the 25th International Conference on Software Engineering, 3–10 May 2003, Portland, Oregon, USA, pp. 232–243 (2003)

  52. Horgan, J.R., London, S.: ATAC: a data flow coverage testing tool for C. In: Proceedings of Symposium on Assessment of Quality Software Development Tools, pp. 2–10 (1992)

  53. Horgan, J.R., London, S.: Data flow coverage and the C language. In: Proceedings of the Symposium on Testing, Analysis, and Verification, pp. 87–97. TAV4, ACM, New York (1991)

  54. Hutchins, M., Foster, H., Goradia, T., Ostrand, T.J.: Experiments of the effectiveness of dataflow- and controlflow-based test adequacy criteria. In: ICSE, pp. 191–200 (1994)

  55. Jamrozik, K., Fraser, G., Tillmann, N., de Halleux, J.: Augmented dynamic symbolic execution. In: IEEE/ACM International Conference on Automated Software Engineering, pp. 254–257 (2012)

  56. Jhala, R., Majumdar, R.: Software model checking. ACM Comput. Surv. (CSUR) 41(4), 21 (2009)

  57. Khamis, A., Bahgat, R., Abdelaziz, R.: Automatic test data generation using data flow information. Dogus Univ. J. 2, 140–153 (2011)

  58. King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976)

  59. Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-C: a software analysis perspective. Formal Asp. Comput. 27(3), 573–609 (2015). https://doi.org/10.1007/s00165-014-0326-7

  60. Lakhotia, K., McMinn, P., Harman, M.: Automated test data generation for coverage: haven’t we solved this problem yet? In: Proceedings of the 2009 Testing: Academic and Industrial Conference - Practice and Research Techniques, pp. 95–104. IEEE Computer Society, Washington (2009)

  61. Ma, K.-K., Yit Phang, K., Foster, J.S., Hicks, M.: Directed symbolic execution. In: Yahav, E. (ed.) SAS 2011. LNCS, vol. 6887, pp. 95–111. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23702-7_11

  62. Malevris, N., Yates, D.: The collateral coverage of data flow criteria when branch testing. Inf. Softw. Technol. 48(8), 676–686 (2006)

  63. Marcozzi, M., Bardin, S., Kosmatov, N., Papadakis, M., Prevosto, V., Correnson, L.: Time to clean your test objectives. In: 40th International Conference on Software Engineering, 27 May–3 June 2018, Gothenburg, Sweden (2018)

  64. Marinescu, P.D., Cadar, C.: KATCH: high-coverage testing of software patches. In: Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ESEC/FSE 2013, Saint Petersburg, Russian Federation, 18–26 August 2013, pp. 235–245 (2013)

  65. Marré, M., Bertolino, A.: Unconstrained duas and their use in achieving all-uses coverage. In: Proceedings of the 1996 ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 1996, pp. 147–157. ACM, New York (1996)

  66. Marré, M., Bertolino, A.: Using spanning sets for coverage testing. IEEE Trans. Softw. Eng. 29(11), 974–984 (2003)

  67. Mathur, A.P., Wong, W.E.: An empirical comparison of data flow and mutation-based test adequacy criteria. Softw. Test. Verif. Reliab. 4(1), 9–31 (1994)

  68. Merlo, E., Antoniol, G.: A static measure of a subset of intra-procedural data flow testing coverage based on node coverage. In: CASCON, p. 7 (1999)

  69. Nayak, N., Mohapatra, D.P.: Automatic test data generation for data flow testing using particle swarm optimization. In: IC3 (2), pp. 1–12 (2010)

  70. Necula, G.C., McPeak, S., Rahul, S.P., Weimer, W.: CIL: intermediate language and tools for analysis and transformation of C programs. In: Horspool, R.N. (ed.) CC 2002. LNCS, vol. 2304, pp. 213–228. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45937-5_16

  71. Offutt, A.J., Pan, J.: Automatically detecting equivalent mutants and infeasible paths. Softw. Test. Verif. Reliab. 7(3), 165–192 (1997)

  72. Pande, H.D., Landi, W.A., Ryder, B.G.: Interprocedural def-use associations for C systems with single level pointers. IEEE Trans. Softw. Eng. 20(5), 385–403 (1994)

  73. Pandita, R., Xie, T., Tillmann, N., de Halleux, J.: Guided test generation for coverage criteria. In: Proceedings of the 2010 IEEE International Conference on Software Maintenance, pp. 1–10. IEEE Computer Society, Washington (2010)

  74. Peng, Y., Huang, Y., Su, T., Guo, J.: Modeling and verification of AUTOSAR OS and EMS application. In: Seventh International Symposium on Theoretical Aspects of Software Engineering, TASE 2013, 1–3 July 2013, Birmingham, UK, pp. 37–44 (2013)

  75. Rapps, S., Weyuker, E.J.: Data flow analysis techniques for test data selection. In: Proceedings of the 6th International Conference on Software Engineering, ICSE 1982, pp. 272–278. IEEE Computer Society Press, Los Alamitos (1982)

  76. Rapps, S., Weyuker, E.J.: Selecting software test data using data flow information. IEEE Trans. Software Eng. 11(4), 367–375 (1985)

  77. Santelices, R., Harrold, M.J.: Efficiently monitoring data-flow test coverage. In: Proceedings of the twenty-second IEEE/ACM International Conference on Automated Software Engineering, ASE 2007, pp. 343–352. ACM, New York (2007)

  78. Santelices, R.A., Sinha, S., Harrold, M.J.: Subsumption of program entities for efficient coverage and monitoring. In: Third International Workshop on Software Quality Assurance, SOQUA 2006, Portland, Oregon, USA, 6 November 2006, pp. 2–5 (2006)

  79. Sen, K., Marinov, D., Agha, G.: CUTE: a concolic unit testing engine for C. In: Proceedings of the 10th European Software Engineering Conference Held Jointly with 13th ACM SIGSOFT International Symposium on Foundations of software engineering, pp. 263–272. ACM, New York (2005)

  80. Singla, S., Kumar, D., Rai, H.M., Singla, P.: A hybrid PSO approach to automate test data generation for data flow coverage with dominance concepts. J. Adv. Sci. Technol. 37, 15–26 (2011)

  81. Singla, S., Singla, P., Rai, H.M.: An automatic test data generation for data flow coverage using soft computing approach. IJRRCS 2(2), 265–270 (2011)

  82. SIR Project: Software-artifact infrastructure repository. NC State University. http://sir.unl.edu/php/previewfiles.php. Accessed July 2016

  83. Su, T.: A bibliography of papers and tools on data flow testing. GitHub (2017). https://tingsu.github.io/files/dftbib.html

  84. Su, T., Fu, Z., Pu, G., He, J., Su, Z.: Combining symbolic execution and model checking for data flow testing. In: 37th IEEE/ACM International Conference on Software Engineering, ICSE 2015, Florence, Italy, 16–24 May 2015, vol. 1, pp. 654–665 (2015)

  85. Su, T., et al.: Automated coverage-driven test data generation using dynamic symbolic execution. In: Eighth International Conference on Software Security and Reliability, SERE 2014, San Francisco, California, USA, 30 June–2 July 2014, pp. 98–107 (2014)

  86. Su, T., et al.: A survey on data-flow testing. ACM Comput. Surv. 50(1), 5:1–5:35 (2017)

  87. Su, T., Zhang, C., Yan, Y., Su, Z.: Towards efficient data-flow test data generation. GitHub (2019). https://tingsu.github.io/files/hybrid_dft.html

  88. Tillmann, N., de Halleux, J.: Pex–white box test generation for .NET. In: Beckert, B., Hähnle, R. (eds.) TAP 2008. LNCS, vol. 4966, pp. 134–153. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79124-9_10

  89. Vivanti, M., Mis, A., Gorla, A., Fraser, G.: Search-based data-flow test generation. In: IEEE 24th International Symposium on Software Reliability Engineering, ISSRE 2013, Pasadena, CA, USA, 4–7 November 2013, pp. 370–379 (2013)

  90. Wang, H., Liu, T., Guan, X., Shen, C., Zheng, Q., Yang, Z.: Dependence guided symbolic execution. IEEE Trans. Software Eng. 43(3), 252–271 (2017)

  91. Wang, Z., Yu, X., Sun, T., Pu, G., Ding, Z., Hu, J.: Test data generation for derived types in C program. In: TASE 2009, Third IEEE International Symposium on Theoretical Aspects of Software Engineering, 29–31 July 2009, Tianjin, China, pp. 155–162 (2009)

  92. Weyuker, E.J.: The complexity of data flow criteria for test data selection. Inf. Process. Lett. 19(2), 103–109 (1984)

  93. Weyuker, E.J.: More experience with data flow testing. IEEE Trans. Software Eng. 19(9), 912–919 (1993)

  94. Wong, W.E., Mathur, A.P.: Fault detection effectiveness of mutation and data flow testing. Software Qual. J. 4(1), 69–83 (1995)

  95. Xie, T., Tillmann, N., de Halleux, J., Schulte, W.: Fitness-guided path exploration in dynamic symbolic execution. In: Proceedings of the 2009 IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 359–368 (2009)

  96. Yang, Q., Li, J.J., Weiss, D.M.: A survey of coverage-based testing tools. Comput. J. 52(5), 589–597 (2009)

  97. Zamfir, C., Candea, G.: Execution synthesis: a technique for automated software debugging. In: European Conference on Computer Systems, Proceedings of the 5th European Conference on Computer Systems, EuroSys 2010, Paris, France, 13–16 April 2010, pp. 321–334 (2010)

  98. Zhang, C., et al.: SmartUnit: empirical evaluations for automated unit testing of embedded software in industry. In: 40th IEEE/ACM International Conference on Software Engineering, Software Engineering in Practice Track, ICSE 2018, 27 May–3 June 2018, Gothenburg, Sweden (2018)

  99. Zhang, L., Xie, T., Zhang, L., Tillmann, N., de Halleux, J., Mei, H.: Test generation via dynamic symbolic execution for mutation testing. In: 26th IEEE International Conference on Software Maintenance (ICSM 2010), 12–18 September 2010, Timisoara, Romania, pp. 1–10 (2010)

  100. Zhu, H., Hall, P.A.V., May, J.H.R.: Software unit test coverage and adequacy. ACM Comput. Surv. 29(4), 366–427 (1997)

Acknowledgements

This work is in honor of Jifeng He's contributions to computer science, especially his establishment of the Unifying Theories of Programming (UTP). This work applies formal methods to support software testing and was influenced by Jifeng He's work. Ting Su, the lead author, sincerely appreciates the academic guidance of his PhD supervisor, Jifeng He.

Author information

Corresponding author

Correspondence to Ting Su.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Su, T. et al. (2023). Towards Efficient Data-Flow Test Data Generation. In: Bowen, J.P., Li, Q., Xu, Q. (eds) Theories of Programming and Formal Methods. Lecture Notes in Computer Science, vol 14080. Springer, Cham. https://doi.org/10.1007/978-3-031-40436-8_10

  • DOI: https://doi.org/10.1007/978-3-031-40436-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40435-1

  • Online ISBN: 978-3-031-40436-8

  • eBook Packages: Computer Science (R0)
