Feasible test path selection by principal slicing

  • István Forgács
  • Antonia Bertolino
Regular Sessions: Testing
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1301)

Abstract

We propose to improve current path-wise methods for automatic test data generation by using a new method named principal slicing. This method statically derives program slices with a near-minimum number of influencing predicates, using both control and data flow information. Paths derived on principal slices to reach a given program point are therefore very likely to be feasible. We discuss how our method improves on previously proposed approaches, both static and dynamic, and we provide an algorithm for deriving principal slices. We then illustrate the application of principal slicing to testing, taking a specific test criterion, branch coverage, as an example. The result is an optimised method for automated branch testing: not only do we use principal slicing to obtain feasible test paths, but we also use the concept of spanning sets of branches to guide the selection of each next path, which prevents the generation of redundant tests.
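
To make the notion of influencing predicates concrete, here is a minimal illustrative sketch of classical backward slicing over a toy program dependence graph (PDG), written in Python. It is not the principal-slicing algorithm of the paper: the example graph, node names, and predicate set are hypothetical, and the sketch only shows how a backward slice isolates the predicates that can influence whether a target point is reached.

    # Minimal sketch: backward slicing over a toy program dependence graph.
    # NOT the paper's principal-slicing algorithm; the graph, node names and
    # predicate set below are hypothetical, for illustration only.
    from collections import deque

    # deps[n] = the set of nodes that n is control- or data-dependent on
    deps = {
        "target": {"p3", "d2"},
        "p3": {"d2", "p1"},
        "d2": {"p2"},
        "p2": {"d1"},
        "p1": set(),
        "d1": set(),
    }

    predicates = {"p1", "p2", "p3"}  # predicate (branching) nodes

    def backward_slice(start):
        """Collect every node that 'start' transitively depends on."""
        seen, work = {start}, deque([start])
        while work:
            node = work.popleft()
            for dep in deps.get(node, ()):
                if dep not in seen:
                    seen.add(dep)
                    work.append(dep)
        return seen

    slice_nodes = backward_slice("target")
    influencing = slice_nodes & predicates
    print("slice:", sorted(slice_nodes))
    print("influencing predicates:", sorted(influencing))

In this toy graph every predicate ends up in the slice; principal slicing, as described above, statically selects a slice whose set of influencing predicates is near the minimum, so that a path constructed to reach the target point has as few predicates to satisfy as possible and is therefore more likely to be feasible.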

Keywords

automatic test data generation, ddgraph, influencing predicates, PDG, principal definition, slicing

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • István Forgács¹
  • Antonia Bertolino²

  1. Computer and Automation Institute, Hungarian Academy of Sciences, Budapest, Hungary
  2. Istituto di Elaborazione della Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy
