Testing Functional Black-Box Programs Without a Specification

Chapter in: Machine Learning for Dynamic Software Analysis: Potentials and Limits

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 11026)

Abstract

In this chapter we examine the problem of testing functional black-box programs that do not require sequential inputs, focusing specifically on the case where there is no existing specification from which to derive tests. Research into this problem dates back over three decades and has produced a variety of techniques, all of which employ data mining and machine learning algorithms to examine test executions and to inform the selection of new tests. Here we provide an overview of these techniques, examine their limitations, and identify opportunities for future research.
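
To make the loop described above concrete, the following is a minimal sketch, not taken from the chapter, of inference-driven test generation in the spirit of uncertainty-driven black-box test data generation [31]: a learner infers a model of the program from the executions observed so far, and the next test is selected where that model is least certain. The toy program under test, the choice of a decision-tree learner, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: infer a model of a black-box program from observed
# executions, then choose the next test where the model is least certain.
import random

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def program_under_test(x: float, y: float) -> bool:
    # Stand-in black box: its behaviour is observable only by executing it.
    return x * x + y * y < 1.0


def random_input() -> list:
    return [random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)]


# Seed the process with a small batch of random executions.
inputs = [random_input() for _ in range(20)]
outputs = [program_under_test(*i) for i in inputs]

for _ in range(30):
    # Infer a model of the program from the executions observed so far.
    model = DecisionTreeClassifier(max_depth=4).fit(inputs, outputs)

    # Score candidate inputs by the model's confidence; a predicted
    # probability near 0.5 marks maximal uncertainty about the outcome.
    candidates = [random_input() for _ in range(200)]
    probabilities = model.predict_proba(candidates)
    if probabilities.shape[1] < 2:
        chosen = candidates[0]  # only one class observed so far
    else:
        chosen = candidates[int(np.argmin(np.abs(probabilities[:, 0] - 0.5)))]

    # Execute the chosen test and fold the new observation back in.
    inputs.append(chosen)
    outputs.append(program_under_test(*chosen))

print(f"Executed {len(inputs)} tests")
```

The techniques surveyed in the chapter differ chiefly in the candidate-selection step: adaptive random testing [8, 10] favours inputs that are distant from previously executed tests, whereas model-inference approaches [12, 22, 31] favour inputs on which the inferred model is least reliable.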


Notes

  1. Note that the use of ‘functional’ in this chapter refers to the external behaviour of the program, not the programming paradigm used to implement it.

  2. The idea of combining algebraic specification inference with test generation was proposed in the same year by Xie and Notkin [34], though this was not accompanied by an implementation.

  3. The requirement for access to source-code coverage can be waived where such coverage is unavailable, in which case test generation is driven entirely by the inferred model.

References

  1. Arcuri, A., Briand, L.: Adaptive random testing: an illusion of effectiveness? In: Proceedings of the 2011 International Symposium on Software Testing and Analysis, pp. 265–275. ACM (2011)

  2. Barr, E.T., Harman, M., McMinn, P., Shahbaz, M., Yoo, S.: The Oracle problem in software testing: a survey. IEEE Trans. Softw. Eng. 41(5), 507–525 (2015)

  3. Bergadano, F., Gunetti, D.: An interactive system to learn functional logic programs. In: IJCAI, pp. 1044–1049 (1993)

  4. Bergadano, F., Gunetti, D.: Testing by means of inductive program learning. ACM Trans. Softw. Eng. Methodol. (TOSEM) 5(2), 119–145 (1996)

  5. Bongard, J.C., Lipson, H.: Nonlinear system identification using coevolution of models and tests. IEEE Trans. Evol. Comput. 9(4), 361–384 (2005)

  6. Briand, L.C., Labiche, Y., Bawar, Z., Spido, N.T.: Using machine learning to refine category-partition test specifications and test suites. Inf. Softw. Technol. 51(11), 1551–1564 (2009)

  7. Budd, T.A., Angluin, D.: Two notions of correctness and their relation to testing. Acta Informatica 18(1), 31–45 (1982)

  8. Chen, T.Y., Leung, H., Mak, I.K.: Adaptive random testing. In: Maher, M.J. (ed.) ASIAN 2004. LNCS, vol. 3321, pp. 320–329. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30502-6_23

  9. Cherniavsky, J., Smith, C.: A recursion theoretic approach to program testing. IEEE Trans. Softw. Eng. 13, 777–784 (1987)

  10. Ciupa, I., Leitner, A., Oriol, M., Meyer, B.: ARTOO: adaptive random testing for object-oriented software. In: Proceedings of the 30th International Conference on Software Engineering, pp. 71–80. ACM (2008)

  11. Feldt, R., Poulding, S., Clark, D., Yoo, S.: Test set diameter: quantifying the diversity of sets of test cases. In: 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 223–233. IEEE (2016)

  12. Fraser, G., Walkinshaw, N.: Assessing and generating test sets in terms of behavioural adequacy. Softw. Test. Verif. Reliab. 25(8), 749–780 (2015)

  13. Goodenough, J.B., Gerhart, S.L.: Toward a theory of test data selection. IEEE Trans. Softw. Eng. 2, 156–173 (1975)

  14. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. SIGKDD Explor. Newsl. 11(1), 10–18 (2009)

  15. Hamlet, R.: Random testing. In: Encyclopedia of Software Engineering (1994)

  16. Henkel, J., Diwan, A.: Discovering algebraic specifications from Java classes. In: Cardelli, L. (ed.) ECOOP 2003. LNCS, vol. 2743, pp. 431–456. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45070-2_19

  17. King, R.D., Whelan, K.E., Jones, F.M., Reiser, P.G.K., Bryant, C.H., Muggleton, S.H., Kell, D.B., Oliver, S.G.: Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427(6971), 247–252 (2004)

  18. Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach. Learn. 51(2), 181–207 (2003)

  19. Lavrac, N., Dzeroski, S.: Inductive logic programming. In: WLP, pp. 146–160. Springer, Heidelberg (1994)

  20. Moore, E.F.: Gedanken-experiments on sequential machines. Automata Stud. 34, 129–153 (1956)

  21. Ostrand, T.J., Balcer, M.J.: The category-partition method for specifying and generating functional tests. Commun. ACM 31(6), 676–686 (1988)

  22. Papadopoulos, P., Walkinshaw, N.: Black-box test generation from inferred models. In: Proceedings of the Fourth International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering, pp. 19–24. IEEE Press (2015)

  23. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)

  24. Romanik, K., Vitter, J.S.: Using computational learning theory to analyze the testing complexity of program segments. In: Proceedings of the Seventeenth Annual International Computer Software and Applications Conference (COMPSAC 1993), pp. 367–373. IEEE (1993)

  25. Romanik, K., Vitter, J.S.: Using Vapnik-Chervonenkis dimension to analyze the testing complexity of program segments. Inf. Comput. 128(2), 87–108 (1996)

  26. Settles, B.: Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison (2010)

  27. Summers, P.D.: A methodology for LISP program construction from examples. J. ACM (JACM) 24(1), 161–175 (1977)

  28. Tillmann, N., Schulte, W.: Parameterized unit tests. In: ACM SIGSOFT Software Engineering Notes, vol. 30, pp. 253–262. ACM (2005)

  29. Valiant, L.G.: A theory of the learnable. Commun. ACM 27(11), 1134–1142 (1984)

  30. Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. In: Vovk, V., Papadopoulos, H., Gammerman, A. (eds.) Measures of Complexity, pp. 11–30. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21852-6_3

  31. Walkinshaw, N., Fraser, G.: Uncertainty-driven black-box test data generation. In: International Conference on Software Testing (ICST) (2017)

  32. Weyuker, E.J.: Assessing test data adequacy through program inference. ACM Trans. Program. Lang. Syst. (TOPLAS) 5(4), 641–655 (1983)

  33. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)

  34. Xie, T., Notkin, D.: Mutually enhancing test generation and specification inference. In: Petrenko, A., Ulrich, A. (eds.) FATES 2003. LNCS, vol. 2931, pp. 60–69. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24617-6_5

  35. Zhu, H.: A formal interpretation of software testing as inductive inference. Softw. Test. Verif. Reliab. 6(1), 3–31 (1996)

  36. Zhu, H., Hall, P., May, J.: Inductive inference and software testing. Softw. Test. Verif. Reliab. 2(2), 69–81 (1992)


Author information

Correspondence to Neil Walkinshaw.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Walkinshaw, N. (2018). Testing Functional Black-Box Programs Without a Specification. In: Bennaceur, A., Hähnle, R., Meinke, K. (eds.) Machine Learning for Dynamic Software Analysis: Potentials and Limits. Lecture Notes in Computer Science, vol. 11026. Springer, Cham. https://doi.org/10.1007/978-3-319-96562-8_4

  • DOI: https://doi.org/10.1007/978-3-319-96562-8_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96561-1

  • Online ISBN: 978-3-319-96562-8

  • eBook Packages: Computer Science, Computer Science (R0)
