
The Practical Assessment of Test Sets with Inductive Inference Techniques

  • Neil Walkinshaw
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6303)

Abstract

Inductive inference is the process of hypothesizing a model from a set of examples. It can be considered the inverse of program testing, which is the process of generating a finite set of tests intended to fully exercise a software system. This relationship has been acknowledged for almost 30 years, and has led to the emergence of several induction-based techniques that aim either to generate suitable test sets or to assess the adequacy of existing test sets. Unfortunately, these techniques are usually deemed too impractical because they are based on exact inference, which requires a vast set of examples or tests. In practice, a test set can still be adequate even if the inferred model contains minor errors. This paper shows how the Probably Approximately Correct (PAC) framework, a well-established approach in the field of inductive inference, can be applied to inductive testing techniques. This facilitates a more pragmatic assessment of these techniques by allowing for a degree of error. The evaluation framework gives rise to a challenge: to identify the best combination of testing and inference techniques that produces practical and (approximately) adequate test sets.
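
The PAC idea can be made concrete with a small sketch. The following Python fragment is not taken from the paper; it is a minimal illustration, assuming we can sample inputs from some usage distribution and query both the real system and the model inferred from a test set. The names pac_sample_size, approximately_adequate, and sample_input, as well as the epsilon/delta values, are hypothetical. If the inferred model survives m >= (1/epsilon) * ln(1/delta) independent random checks, then with probability at least 1 - delta its true error is at most epsilon, which is one sense in which the underlying test set could be called "approximately adequate" while still tolerating minor errors in the inferred model.

    import math
    import random

    def pac_sample_size(epsilon, delta):
        # Number of checks needed so that a model whose true error exceeds
        # epsilon passes all of them with probability at most delta:
        # (1 - epsilon)^m <= exp(-epsilon * m) <= delta.
        return math.ceil(math.log(1.0 / delta) / epsilon)

    def approximately_adequate(inferred_model, system, sample_input,
                               epsilon=0.05, delta=0.01):
        # The test set behind `inferred_model` is judged (epsilon, delta)-adequate
        # if the model it produced agrees with the system on enough random inputs.
        for _ in range(pac_sample_size(epsilon, delta)):
            x = sample_input()
            if inferred_model(x) != system(x):
                return False  # a disagreement exposes an inadequacy in the test set
        return True

    # Toy example: the "system" accepts strings with an even number of 'a's,
    # while the (badly) inferred model accepts every string.
    system = lambda s: s.count("a") % 2 == 0
    inferred = lambda s: True
    sample = lambda: "".join(random.choice("ab") for _ in range(8))
    print(approximately_adequate(inferred, system, sample))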

Keywords

Inductive Inference · Inductive Testing · Automaton Learning · Inference Technique · Practical Assessment

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Neil Walkinshaw
  1. Department of Computer Science, The University of Sheffield, Sheffield, UK
