Assessing Test Adequacy for Black-Box Systems without Specifications

  • Neil Walkinshaw
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7019)

Abstract

Testing a black-box system without recourse to a specification is difficult, because there is no basis for estimating how many tests will be required or for assessing how complete a given test set is. Several researchers have noted a duality between these testing problems and the problem of inductive inference (learning a model of a hidden system from a given set of examples): it is impossible to tell how many examples will be required to infer an accurate model, and there is no basis for judging how complete a given set of examples is. These issues have been addressed in the domain of inductive inference by developing statistical techniques in which the accuracy of an inferred model is subject to a tolerable degree of error. This paper explores the application of these techniques to the assessment of test sets for black-box systems. It shows how they can be used to reason in a statistically justified manner about the number of tests required to fully exercise a system without a specification, and how to provide a valid adequacy measure for black-box test sets in an applied context.
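As a concrete illustration of the kind of statistical reasoning the abstract alludes to, the sketch below computes the classical PAC ("probably approximately correct") sample-complexity bound for a finite hypothesis space: the number of randomly drawn examples that suffices for any consistent hypothesis to have error at most epsilon with probability at least 1 - delta. This is a minimal, generic sketch of the underlying bound, not the paper's own procedure; the hypothesis-space size used in the example is an assumed rough figure for small DFAs.

    import math

    def pac_sample_size(hypothesis_space_size, epsilon, delta):
        # Classical PAC bound for a finite hypothesis space:
        # m >= (1 / epsilon) * (ln|H| + ln(1 / delta)) examples suffice so that,
        # with probability at least 1 - delta, every hypothesis consistent with
        # the examples has true error at most epsilon.
        return math.ceil((math.log(hypothesis_space_size) + math.log(1.0 / delta)) / epsilon)

    # Illustrative assumption: treat the target as a DFA with at most 10 states
    # over a binary alphabet; 10**24 is a loose upper bound on the number of
    # such machines. With epsilon = delta = 0.05 the bound gives an upper
    # estimate of the number of random tests needed.
    print(pac_sample_size(hypothesis_space_size=10**24, epsilon=0.05, delta=0.05))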

Keywords

Finite State Machine · Software Testing · Inductive Inference · Testing Context · Probably Approximately Correct


Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Neil Walkinshaw
  1. Department of Computer Science, The University of Leicester, UK
