Abstract
A natural and widely used strategy when testing software is to choose input values at boundaries, i.e. where behavior is expected to change the most; this approach is often called boundary value testing or analysis (BVA). Even though it has long been a key testing idea, it has proven hard to define and formalize clearly. Consequently, it has also been hard to automate.
In this research note we propose one such formalization of BVA: analogously to how the derivative of a function is defined in mathematics, we consider (software) program derivatives. Critical to our definition is the notion of distance between inputs and outputs, which we formalize and then quantify using ideas from information theory.
However, for our (black-box) approach to be practical, one must search for test inputs with specific properties. The approach must therefore be coupled with search-based software engineering, and we discuss how program derivatives can be used as, and within, fitness functions.
This brief note does not allow a deeper, empirical investigation, but we use a simple illustrative example throughout to introduce the main ideas. By combining program derivatives with search, we propose a technique for automated boundary value (analysis and) testing that is both practical and theoretically interesting.
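To make the idea concrete, the quotient underlying a program derivative can be sketched as the ratio of output distance to input distance for a pair of inputs, with the normalized compression distance (NCD) serving as a practical, information-theoretic distance. The sketch below is our own illustration under stated assumptions, not the authors' implementation: the `ncd`, `pdq`, and `classify` names, the string encoding of values, and the toy program with a boundary at 100 are all invented for exposition.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    information distance between two byte strings."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def pdq(program, x1, x2, enc=lambda v: str(v).encode()) -> float:
    """Program difference quotient (illustrative formulation): output
    distance divided by input distance for a pair of inputs."""
    d_in = ncd(enc(x1), enc(x2))
    d_out = ncd(enc(program(x1)), enc(program(x2)))
    return d_out / d_in if d_in > 0 else 0.0

# Toy program (our assumption) with a behavioral boundary at x == 100.
def classify(x: int) -> str:
    return ("high priority alert " * 4) if x > 100 else ("low priority ok " * 4)

# The quotient should peak for input pairs straddling the boundary,
# since their outputs differ while interior pairs produce identical output.
boundary = pdq(classify, 100, 101)
interior = pdq(classify, 40, 41)
```

On this toy program, `boundary` exceeds `interior`, which is exactly the signal a boundary-seeking fitness function needs; real data types would require distances and encodings suited to their domain.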
Notes
1. Also called domain testing in several papers, though less so in recent years.
2. Subtraction also preserves directionality; however, in the following we focus purely on the distance (the absolute value) rather than on its directionality (which, we argue, is a less clear concept for arbitrary data types).
3. We note that for search-based testing the PDQ, used on the right-hand side of the PD definition, might be a more fruitful concept than the derivative itself, since for complex and high-dimensional data domains the value closest to another value can be ill-defined, and there can be several directions that are interesting to consider for sensitivity and rates of change (not only that of the closest 'neighbour').
4. Future work should investigate the many alternatives here and whether they make any difference in practice.
5. In an automated BVA tool we would instead have used search here.
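Note 5 hints at where search enters: instead of picking candidate inputs by hand, an automated BVA tool would maximize a difference-quotient-style fitness over the input space. The self-contained sketch below is our own illustration, not the authors' tool: it assumes an NCD-based program difference quotient as the fitness, a toy `classify` program with a boundary at 100, and, because the toy domain is tiny, an exhaustive scan standing in for the metaheuristic search a real tool would use.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance between two byte strings.
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    return (len(zlib.compress(a + b)) - min(ca, cb)) / max(ca, cb)

def pdq(program, x1, x2) -> float:
    # Illustrative program difference quotient: output distance / input distance.
    enc = lambda v: str(v).encode()
    d_in = ncd(enc(x1), enc(x2))
    d_out = ncd(enc(program(x1)), enc(program(x2)))
    return d_out / d_in if d_in > 0 else 0.0

def classify(x: int) -> str:
    # Toy system under test (our assumption) with a boundary at x == 100.
    return ("high priority alert " * 4) if x > 100 else ("low priority ok " * 4)

def find_boundary_pair(program, lo, hi):
    """Return the adjacent input pair with maximal PDQ. A real BVA tool
    would explore the space with metaheuristic search; we scan
    exhaustively here only because the toy domain is tiny."""
    return max(((x, x + 1) for x in range(lo, hi)),
               key=lambda pair: pdq(program, *pair))

# best holds the adjacent pair with the sharpest observed behavioral change.
best = find_boundary_pair(classify, 0, 200)
```

Using the quotient (rather than the derivative at a single point) as fitness sidesteps the ill-defined "closest value" issue raised in note 3: the search is free to propose any pair of nearby inputs and simply reward those where output distance outpaces input distance.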
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Feldt, R., Dobslaw, F. (2019). Towards Automated Boundary Value Testing with Program Derivatives and Search. In: Nejati, S., Gay, G. (eds.) Search-Based Software Engineering. SSBSE 2019. Lecture Notes in Computer Science, vol. 11664. Springer, Cham. https://doi.org/10.1007/978-3-030-27455-9_11
Print ISBN: 978-3-030-27454-2
Online ISBN: 978-3-030-27455-9