Fast, Automatic, and Nearly Complete Structural Unit-Test Generation Combining Genetic Algorithms and Formal Methods

  • Eric Lavillonnière
  • David Mentré
  • Denis Cousineau
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11823)

Abstract

Software testing is a time-consuming and error-prone activity that remains largely manual in most industries. One approach to increasing productivity is to generate tests automatically. In this paper, we focus on the automatic generation of structural unit tests for safety-critical embedded software. Our goal is a tool that integrates seamlessly with existing industrial test processes. We use genetic algorithms and automatic stub generation to quickly and automatically produce test cases satisfying the test objectives of a given coverage criterion, using only the software under test as input. Moreover, we combine these genetic algorithms with formal methods to detect infeasible test objectives and to help cover difficult ones. We implemented our approach in a tool and evaluated it on a real-world industrial project, demonstrating that it can reliably generate test cases when they are feasible, or prove them infeasible otherwise, for 99% of the MC/DC test objectives in about half an hour on 82,000 lines of C code with integer data.
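The paper's tool itself is not reproduced here. As a rough illustration of the search-based side of the approach (a genetic algorithm guided by a standard branch-distance fitness), the following self-contained Python sketch evolves an input that covers a hard-to-hit branch. The function under test, the fitness, and all parameters (`under_test`, `generate_test_input`, population size, mutation range) are invented for illustration and do not reflect the paper's actual implementation.

```python
import random

# Hypothetical function under test: random inputs almost never take the
# "target" branch, which is what makes search-based generation useful.
def under_test(x):
    if x * (x + 1) // 2 == 5050:     # hard-to-hit branch
        return "target"
    return "other"

# Branch-distance fitness: 0 iff the target branch is taken, otherwise
# a measure of how far the branch condition is from holding.
def fitness(x):
    return abs(x * (x + 1) // 2 - 5050)

def generate_test_input(pop_size=40, generations=300, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(0, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            return pop[0]                    # covering input found
        survivors = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            pa, pb = rng.sample(survivors, 2)
            child = (pa + pb) // 2           # midpoint crossover
            child += rng.randint(-3, 3)      # small mutation
            children.append(child)
        pop = survivors + children
    return None                              # budget exhausted

x = generate_test_input()
# x now satisfies the branch condition of the target branch
```

Because the fitness decreases smoothly as inputs approach the branch condition, the population converges far faster than random input generation, which is the core argument for search-based test generation over naive fuzzing.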

Keywords

Structural unit test · Automatic stub generation · Genetic algorithms · Abstract Interpretation · Concolic test generation · Bounded Model Checking
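The formal-methods side of the abstract, proving that a test objective is infeasible so testers stop chasing it, can be caricatured in the same spirit. Real tools encode the objective for a SAT/SMT-based bounded model checker such as CBMC; the sketch below substitutes a brute-force bounded search over a small input box, and `branch_condition` plus the bounds are invented for illustration.

```python
from itertools import product

# Hypothetical test objective: cover the branch where all three
# conditions hold. It is in fact unsatisfiable (x > 10 and y > 0
# imply x + y >= 12), so no test case can ever cover it.
def branch_condition(x, y):
    return x > 10 and x + y < 5 and y > 0

def bounded_infeasibility_check(cond, lo=-100, hi=100):
    """Return a witness input if the branch is reachable within the
    box [lo, hi] x [lo, hi], or None if it is infeasible there."""
    for x, y in product(range(lo, hi + 1), repeat=2):
        if cond(x, y):
            return (x, y)          # counterexample: branch reachable
    return None                    # infeasible within the bound

witness = bounded_infeasibility_check(branch_condition)
# witness is None: the objective is infeasible in the explored box
```

As with bounded model checking generally, a `None` result only certifies infeasibility within the explored bound; production tools discharge the unbounded question to a solver or combine it with abstract interpretation, as the paper describes.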


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Mitsubishi Electric R&D Centre Europe, Rennes, France
