
We Need a Testability Transformation Semantics

  • Mark Harman
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10886)

Abstract

This paper briefly reviews Testability Transformation, its formal definition, and the open problem of constructing a set of formal test adequacy semantics to underpin the current practice of deploying transformations to help testing and verification activities. (This paper is a brief outline of some of the content of the author's keynote at the 16th International Conference on Software Engineering and Formal Methods, SEFM 2018, in Toulouse, France, 27–29 June 2018.)
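
To make the idea concrete, the sketch below illustrates the classic "flag problem" that motivates much of the testability transformation literature. It is a hypothetical, minimal example and is not taken from the paper: original and transformed_distance are illustrative names, and the transformation shown is only one simple instance of flag removal. The transformed program is not functionally equivalent to the original; it merely gives a search-based test generator a smooth distance to minimise, which is exactly why such transformations call for a dedicated test adequacy semantics rather than conventional equivalence-preserving semantics.

/* Hypothetical illustration (names and code are assumptions, not from the paper). */
#include <stdio.h>

/* Original: a loop-assigned flag gives branch-coverage search no guidance;
   the predicate "if (flag)" is simply true or false, with no gradient. */
int original(int a[], int n) {
    int flag = 1;
    for (int i = 0; i < n; i++) {
        if (a[i] != 0) {
            flag = 0;              /* any non-zero element kills the flag */
        }
    }
    return flag ? 1 : 0;           /* target branch: taken only when all elements are zero */
}

/* Transformed: in the spirit of flag-removal testability transformations,
   the flag is replaced by a distance counting how many elements still
   violate the condition.  A distance of 0.0 corresponds to covering the
   target branch; smaller distances mean "closer", giving the test
   generator a gradient to follow. */
double transformed_distance(int a[], int n) {
    double dist = 0.0;
    for (int i = 0; i < n; i++) {
        if (a[i] != 0) {
            dist += 1.0;
        }
    }
    return dist;
}

int main(void) {
    int x[] = {0, 3, 0, 7};
    printf("original: target branch covered? %d\n", original(x, 4));
    printf("transformed: distance to covering it = %.1f\n", transformed_distance(x, 4));
    return 0;
}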

Acknowledgements

Many thanks to Patrick Cousot, Paul Marinescu, Peter O’Hearn, Tony Hoare, Mike Papadakis, Shin Yoo, and Jie Zhang for comments on earlier drafts. Thanks also to the Facebook Developer Infrastructure leadership for their support and to the European Research Council for part-funding my scientific work through the ERC Advanced Fellowship scheme.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Facebook, London, UK
  2. University College London, London, UK
