High-Coverage Symbolic Patch Testing

  • Paul Dan Marinescu
  • Cristian Cadar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7385)

Abstract

Software patches are often poorly tested, and many of them contain faults that affect the correct operation of the software. In this paper, we propose an automatic technique based on symbolic execution that aims to increase the quality of patches by giving developers an automated mechanism for generating a comprehensive set of test cases covering all or most of the statements in a software patch.

Our preliminary evaluation of this technique has shown promising results on several real patches from the lighttpd web server.
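As a rough illustration of the idea only, the Python sketch below parses a unified diff to collect the patched source lines and then greedily steers a symbolic-execution engine towards the lines that remain uncovered, emitting a concrete test case each time new patch lines are reached. This is not the authors' implementation: the PatchLine, patched_lines, SymbolicEngine and generate_patch_tests names, the engine interface, and the distance-based search heuristic are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of patch-directed test
# generation: find the patched source lines from a unified diff, then steer
# a symbolic-execution engine towards them until every patched line is
# covered or an exploration budget runs out. The SymbolicEngine interface
# is hypothetical; a real system would sit on top of a tool such as KLEE.
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class PatchLine:
    path: str
    line: int


def patched_lines(diff_text: str) -> set:
    """Collect (file, line) targets for every line added by a unified diff."""
    targets, path, line = set(), None, 0
    for raw in diff_text.splitlines():
        if raw.startswith("+++ "):
            path = raw[4:].split("\t")[0].removeprefix("b/")
        elif raw.startswith("@@"):
            m = re.search(r"\+(\d+)", raw)       # new-file start line of the hunk
            line = int(m.group(1)) if m else 0
        elif raw.startswith("+"):
            targets.add(PatchLine(path, line))   # an added or modified line
            line += 1
        elif not raw.startswith("-"):
            line += 1                            # context lines advance the counter
    return targets


class SymbolicEngine:
    """Hypothetical interface over a symbolic-execution engine.

    The method names are placeholders, not a real API; a concrete binding
    would have to provide them."""

    def pending_states(self):                # execution states awaiting exploration
        raise NotImplementedError

    def distance_to(self, state, targets):   # e.g. static CFG distance to targets
        raise NotImplementedError

    def step(self, state):                   # run one step; return newly covered PatchLines
        raise NotImplementedError

    def concretize(self, state):             # solve the path condition into a test input
        raise NotImplementedError


def generate_patch_tests(engine, targets, budget=10_000):
    """Greedy patch-directed search: repeatedly advance the pending state
    closest to an uncovered patched line, emitting a concrete test case
    whenever new patch lines are reached."""
    tests, uncovered = [], set(targets)
    for _ in range(budget):
        if not uncovered or not engine.pending_states():
            break
        state = min(engine.pending_states(),
                    key=lambda s: engine.distance_to(s, uncovered))
        newly_covered = set(engine.step(state))
        if newly_covered & uncovered:
            uncovered -= newly_covered
            tests.append(engine.concretize(state))
    return tests
```

The greedy distance heuristic is only one way to bias exploration towards a patch; the paper's evaluation on lighttpd is the authoritative account of how the actual technique selects and drives paths.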

Keywords

Test Suite; Basic Block; Symbolic Execution; Path Regeneration; Dead Code



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Paul Dan Marinescu
  • Cristian Cadar
  1. Department of Computing, Imperial College London, London, United Kingdom
