Automated Unit Test Generation for Python

  • Stephan Lukasczyk
  • Florian Kroiß
  • Gordon Fraser
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12420)


Automated unit test generation is an established research field, and mature test generation tools exist for statically typed programming languages such as Java. It is, however, substantially more difficult to automatically generate supportive tests for dynamically typed programming languages such as Python, due to the lack of type information and the dynamic nature of the language. In this paper, we describe a foray into the problem of unit test generation for dynamically typed languages. We introduce Pynguin, an automated unit test generation framework for Python. Using Pynguin, we aim to empirically shed light on two central questions: (1) Do well-established search-based test generation methods, previously evaluated only on statically typed languages, generalise to dynamically typed languages? (2) What is the influence of incomplete type information and dynamic typing on the problem of automated test generation? Our experiments confirm that evolutionary algorithms can outperform random test generation in the context of Python as well, and can even alleviate the problem of absent type information to some degree. However, our results demonstrate that dynamic typing nevertheless poses a fundamental issue for test generation, suggesting future work on integrating type inference.


Keywords: Dynamic Typing · Python · Random Test Generation · Whole Suite Test Generation
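The type-information problem the abstract describes can be illustrated with a minimal sketch. This is not Pynguin's actual implementation; the function `join_nonempty` and the naive random-testing loop below are hypothetical examples showing why, without annotations, most randomly generated calls fail before reaching interesting behaviour:

```python
import random
import string

# Illustrative unit under test with no type annotations: nothing in the
# signature tells a test generator what `items` or `sep` should be.
def join_nonempty(items, sep):
    return sep.join(i for i in items if i)

# Generate a value of a randomly guessed type. Without type information,
# a random tester can only sample from candidate types like these.
def random_value(rng):
    choice = rng.randrange(3)
    if choice == 0:
        return rng.randint(-10, 10)                       # an int
    if choice == 1:
        return "".join(rng.choice(string.ascii_letters)   # a short string
                       for _ in range(3))
    return [str(rng.randint(0, 9)) for _ in range(3)]     # a list of strings

# Naive random test generation: call the unit under test with guessed
# inputs and count how many calls crash on a type-related error.
def try_random_calls(n=100, seed=0):
    rng = random.Random(seed)
    ok, failed = 0, 0
    for _ in range(n):
        try:
            join_nonempty(random_value(rng), random_value(rng))
            ok += 1
        except (TypeError, AttributeError):
            failed += 1
    return ok, failed
```

Only calls where `sep` happens to be a string and `items` an iterable of strings succeed; with type annotations a generator could restrict itself to exactly those inputs from the start, which is the effect of type information that the paper's experiments investigate.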



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Stephan Lukasczyk, University of Passau, Passau, Germany
  • Florian Kroiß, University of Passau, Passau, Germany
  • Gordon Fraser, University of Passau, Passau, Germany
