Acta Informatica, Volume 18, Issue 1, pp 31–45

Two notions of correctness and their relation to testing

  • Timothy A. Budd
  • Dana Angluin

Summary

We consider two interpretations of what it means for test data to demonstrate correctness. For each interpretation, we examine the conditions under which test data sufficient to demonstrate correctness exist, and whether such data can be automatically detected and/or generated. We relate these questions to the problem of deciding the equivalence of two programs.
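The connection to program equivalence can be made concrete with a minimal sketch (not the paper's formalism; the programs `p` and `q`, and the enumeration of integer inputs, are illustrative assumptions): a finite test set demonstrates correctness only in the weak sense that a candidate program agrees with a reference program on every test point, and searching for an input on which two programs differ is the semi-decidable half of the equivalence problem.

```python
# Illustrative sketch only: `p` and `q` are hypothetical stand-ins for a
# candidate program and a reference program, modeled as total functions
# on the non-negative integers.

from itertools import count
from typing import Callable, Iterable, Optional

def agrees_on(p: Callable[[int], int],
              q: Callable[[int], int],
              tests: Iterable[int]) -> bool:
    """A finite test set 'demonstrates correctness' only in the weak
    sense that p and q agree on every test point."""
    return all(p(x) == q(x) for x in tests)

def find_distinguishing_input(p: Callable[[int], int],
                              q: Callable[[int], int],
                              bound: Optional[int] = None) -> Optional[int]:
    """Enumerate inputs looking for a counterexample. If p and q differ
    somewhere, an unbounded search eventually finds a witness; if they
    are equivalent, it never halts -- hence the optional bound."""
    inputs = range(bound) if bound is not None else count()
    for x in inputs:
        if p(x) != q(x):
            return x
    return None

# Two programs that agree on a finite test set yet are not equivalent:
# the test set passes but fails to demonstrate correctness.
p = lambda x: x * x
q = lambda x: x * x if x < 100 else x * x + 1

assert agrees_on(p, q, range(100))                   # all tests pass...
assert find_distinguishing_input(p, q, 200) == 100   # ...yet p != q
```

The unbounded search halts exactly when the two programs are inequivalent, which is why equivalence, and with it the strong sense of test-data adequacy, cannot be decided algorithmically in general.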

Copyright information

© Springer-Verlag 1982

Authors and Affiliations

  • Timothy A. Budd (1)
  • Dana Angluin (1, 2)

  1. Department of Computer Science, The University of Arizona, Tucson, USA
  2. Department of Computer Science, Yale University, New Haven, USA
