
Formal Aspects of Computing, Volume 26, Issue 4, pp 795–823

Test-data generation for control coverage by proof

  • Ana Cavalcanti
  • Steve King
  • Colin O’Halloran
  • Jim Woodcock
Original Article

Abstract

Many tools can check whether a test set provides control coverage; they are, however, of little or no help when coverage is not achieved and the test set needs to be completed. In this paper, we describe how a formal characterisation of a coverage criterion can be used to generate test data. We present a procedure based on traditional programming techniques, such as normalisation and weakest-precondition calculation, which provides a basis for automation using an algebraic theorem prover. In the worst case, if automation fails to produce a specific test, we are still left with a specification of the compliant test sets. Many approaches to model-based testing rely on formal models of a system under test; our work, by contrast, is not concerned with the use of abstract models for testing, but with coverage based on the text of programs.
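The idea of deriving test data from weakest preconditions can be illustrated with a toy sketch (this is an illustration of the general technique, not the authors' procedure; all names and the enumeration-based solver are assumptions). For a program `y := x + 3; if y > 10 ...`, the weakest precondition of the branch condition `y > 10` through the assignment is `x + 3 > 10`, and any input satisfying it covers the `then` arm:

```python
# Toy sketch of weakest-precondition-based test-data generation for
# branch coverage. Predicates over the program state are represented
# as Python functions on a state dictionary.

def wp_assign(var, expr, post):
    """wp(var := expr, post) = post with expr substituted for var,
    realised here by evaluating the assignment before the predicate."""
    def pre(state):
        new_state = dict(state)
        new_state[var] = expr(state)  # effect of the assignment
        return post(new_state)
    return pre

def find_input(pred, candidates):
    """Search a finite candidate space for an input satisfying the
    precondition, i.e. a test datum that drives the chosen branch."""
    for x in candidates:
        if pred({'x': x}):
            return x
    return None  # no compliant test datum in the search space

# Program fragment: y := x + 3; if y > 10 ...
branch = lambda s: s['y'] > 10                      # condition to cover
precondition = wp_assign('y', lambda s: s['x'] + 3, branch)
# precondition is equivalent to: x + 3 > 10, i.e. x > 7

test = find_input(precondition, range(-20, 21))
print(test)  # smallest candidate with x > 7, i.e. 8
```

In practice the enumeration step would be replaced by theorem proving or constraint solving over the symbolic precondition, but the pipeline is the same: normalise the program, push the branch condition backwards, and solve for inputs.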

Keywords

Control coverage · Semantics · UTP · Invariants



Copyright information

© British Computer Society 2013

Authors and Affiliations

  • Ana Cavalcanti (1)
  • Steve King (1)
  • Colin O’Halloran (2)
  • Jim Woodcock (1)
  1. Department of Computer Science, University of York, York, UK
  2. Department of Computer Science, University of Oxford, Oxford, UK
