Abstract
In this chapter we examine the problem of testing functional black-box programs that do not require sequential inputs, focussing specifically on the case where there is no existing specification from which to derive tests. Research into this problem dates back over three decades and has produced a variety of techniques, all of which apply data mining and machine learning algorithms to examine test executions and to inform the selection of new tests. Here we provide an overview of these techniques and examine their limitations and opportunities for future research.
Notes
1. Note that the use of ‘functional’ in this chapter refers to the external behaviour of the program, not the programming paradigm used to implement it.
2. The idea of combining algebraic specification inference with test generation was proposed in the same year by Xie and Notkin [34], though this was not accompanied by an implementation.
3. This requirement for access to source code coverage can be waived if it is not available, in which case the coverage becomes entirely driven by the inferred model.
References
Arcuri, A., Briand, L.: Adaptive random testing: an illusion of effectiveness? In: Proceedings of the 2011 International Symposium on Software Testing and Analysis, pp. 265–275. ACM (2011)
Barr, E.T., Harman, M., McMinn, P., Shahbaz, M., Yoo, S.: The Oracle problem in software testing: a survey. IEEE Trans. Softw. Eng. 41(5), 507–525 (2015)
Bergadano, F., Gunetti, D.: An interactive system to learn functional logic programs. In: IJCAI, pp. 1044–1049 (1993)
Bergadano, F., Gunetti, D.: Testing by means of inductive program learning. ACM Trans. Softw. Eng. Methodol. (TOSEM) 5(2), 119–145 (1996)
Bongard, J.C., Lipson, H.: Nonlinear system identification using coevolution of models and tests. IEEE Trans. Evol. Comput. 9(4), 361–384 (2005)
Briand, L.C., Labiche, Y., Bawar, Z., Spido, N.T.: Using machine learning to refine category-partition test specifications and test suites. Inf. Softw. Technol. 51(11), 1551–1564 (2009)
Budd, T.A., Angluin, D.: Two notions of correctness and their relation to testing. Acta Informatica 18(1), 31–45 (1982)
Chen, T.Y., Leung, H., Mak, I.K.: Adaptive random testing. In: Maher, M.J. (ed.) ASIAN 2004. LNCS, vol. 3321, pp. 320–329. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30502-6_23
Cherniavsky, J., Smith, C.: A recursion theoretic approach to program testing. IEEE Trans. Softw. Eng. 13, 777–784 (1987)
Ciupa, I., Leitner, A., Oriol, M., Meyer, B.: ARTOO: adaptive random testing for object-oriented software. In: Proceedings of the 30th International Conference on Software Engineering, pp. 71–80. ACM (2008)
Feldt, R., Poulding, S., Clark, D., Yoo, S.: Test set diameter: quantifying the diversity of sets of test cases. In: 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 223–233. IEEE (2016)
Fraser, G., Walkinshaw, N.: Assessing and generating test sets in terms of behavioural adequacy. Softw. Test. Verif. Reliab. 25(8), 749–780 (2015)
Goodenough, J.B., Gerhart, S.L.: Toward a theory of test data selection. IEEE Trans. Softw. Eng. 2, 156–173 (1975)
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. SIGKDD Explor. Newsl. 11, 10–18 (2009). ISSN 1931–0145
Hamlet, R.: Random testing. In: Encyclopedia of Software Engineering (1994)
Henkel, J., Diwan, A.: Discovering algebraic specifications from Java classes. In: Cardelli, L. (ed.) ECOOP 2003. LNCS, vol. 2743, pp. 431–456. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45070-2_19
King, R.D., Whelan, K.E., Jones, F.M., Reiser, P.G.K., Bryant, C.H., Muggleton, S.H., Kell, D.B., Oliver, S.G.: Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427(6971), 247–252 (2004)
Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach. Learn. 51(2), 181–207 (2003)
Lavrac, N., Dzeroski, S.: Inductive logic programming. In: WLP, pp. 146–160. Springer, Heidelberg (1994)
Moore, E.F.: Gedanken-experiments on sequential machines. Automata Stud. 34, 129–153 (1956)
Ostrand, T.J., Balcer, M.J.: The category-partition method for specifying and generating functional tests. Commun. ACM 31(6), 676–686 (1988)
Papadopoulos, P., Walkinshaw, N.: Black-box test generation from inferred models. In: Proceedings of the Fourth International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering, pp. 19–24. IEEE Press (2015)
Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
Romanik, K., Vitter, J.S.: Using computational learning theory to analyze the testing complexity of program segments. In: Proceedings Seventeenth Annual International Computer Software and Applications Conference. COMPSAC 1993, pp. 367–373. IEEE (1993)
Romanik, K., Vitter, J.S.: Using Vapnik-Chervonenkis dimension to analyze the testing complexity of program segments. Inf. Comput. 128(2), 87–108 (1996)
Settles, B.: Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison (2010)
Summers, P.D.: A methodology for LISP program construction from examples. J. ACM (JACM) 24(1), 161–175 (1977)
Tillmann, N., Schulte, W.: Parameterized unit tests. In: ACM SIGSOFT Software Engineering Notes, vol. 30, pp. 253–262. ACM (2005)
Valiant, L.G.: A theory of the learnable. Commun. ACM 27(11), 1134–1142 (1984)
Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. In: Vovk, V., Papadopoulos, H., Gammerman, A. (eds.) Measures of Complexity, pp. 11–30. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21852-6_3
Walkinshaw, N., Fraser, G.: Uncertainty-driven black-box test data generation. In: International Conference on Software Testing (ICST) (2017)
Weyuker, E.J.: Assessing test data adequacy through program inference. ACM Trans. Program. Lang. Syst. (TOPLAS) 5(4), 641–655 (1983)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
Xie, T., Notkin, D.: Mutually enhancing test generation and specification inference. In: Petrenko, A., Ulrich, A. (eds.) FATES 2003. LNCS, vol. 2931, pp. 60–69. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24617-6_5
Zhu, H.: A formal interpretation of software testing as inductive inference. Softw. Test. Verif. Reliab. 6(1), 3–31 (1996)
Zhu, H., Hall, P., May, J.: Inductive inference and software testing. Softw. Test. Verif. Reliab. 2(2), 69–81 (1992)
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Walkinshaw, N. (2018). Testing Functional Black-Box Programs Without a Specification. In: Bennaceur, A., Hähnle, R., Meinke, K. (eds.) Machine Learning for Dynamic Software Analysis: Potentials and Limits. Lecture Notes in Computer Science, vol. 11026. Springer, Cham. https://doi.org/10.1007/978-3-319-96562-8_4
DOI: https://doi.org/10.1007/978-3-319-96562-8_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-96561-1
Online ISBN: 978-3-319-96562-8