Abstract
Automated unit test generation is an established research field, and mature test generation tools exist for statically typed programming languages such as Java. It is, however, substantially more difficult to automatically generate supporting tests for dynamically typed programming languages such as Python, due to the lack of type information and the dynamic nature of the language. In this paper, we describe a foray into the problem of unit test generation for dynamically typed languages. We introduce Pynguin, an automated unit test generation framework for Python. Using Pynguin, we aim to empirically shed light on two central questions: (1) Do well-established search-based test generation methods, previously evaluated only on statically typed languages, generalise to dynamically typed languages? (2) What is the influence of incomplete type information and dynamic typing on automated test generation? Our experiments confirm that evolutionary algorithms can outperform random test generation in the context of Python as well, and can even alleviate the problem of absent type information to some degree. However, our results demonstrate that dynamic typing nevertheless poses a fundamental obstacle for test generation, suggesting future work on integrating type inference.
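The core difficulty the abstract describes can be illustrated with a minimal sketch. This is not Pynguin's actual algorithm; the functions `typed_add`, `untyped_add`, and the helper names are purely illustrative. The sketch shows how a random test generator can use PEP 484 type hints (when present) to produce type-correct arguments, whereas for an unannotated function it must guess from a pool of candidate types, so many generated calls simply raise `TypeError` instead of exercising the code under test.

```python
import inspect
import random


def typed_add(a: int, b: int) -> int:
    """Annotated function: the generator knows to supply ints."""
    return a + b


def untyped_add(a, b):
    """Unannotated function: the generator has no type information."""
    return a + b


def random_value_for(annotation):
    # With a type hint, generate a value of the matching type.
    if annotation is int:
        return random.randint(-100, 100)
    if annotation is str:
        return random.choice(["", "abc"])
    # Without a hint, guess an arbitrary type; the resulting
    # call may well fail with a TypeError.
    return random.choice([random.randint(-100, 100), "abc", None, []])


def generate_call_args(func):
    """Build random arguments for func, guided by annotations if any."""
    sig = inspect.signature(func)
    return [
        random_value_for(
            p.annotation if p.annotation is not inspect.Parameter.empty else None
        )
        for p in sig.parameters.values()
    ]


random.seed(0)
typed_args = generate_call_args(typed_add)      # always two ints
untyped_args = generate_call_args(untyped_add)  # arbitrary types
```

Under this sketch, `typed_add(*typed_args)` always succeeds, while `untyped_add(*untyped_args)` often crashes on type mismatches, which mirrors the paper's observation that absent type information is a fundamental obstacle for automated test generation.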
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Lukasczyk, S., Kroiß, F., Fraser, G. (2020). Automated Unit Test Generation for Python. In: Aleti, A., Panichella, A. (eds) Search-Based Software Engineering. SSBSE 2020. Lecture Notes in Computer Science(), vol 12420. Springer, Cham. https://doi.org/10.1007/978-3-030-59762-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59761-0
Online ISBN: 978-3-030-59762-7