Multi-angle Evaluations of Test Cases Based on Dynamic Analysis

  • Tao Hu
  • Tu Peng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8933)

Abstract

This paper presents a dynamic analysis of test cases. Through software mining, we obtain a dynamic call tree that reproduces the dynamic function-calling relations exercised by test cases, and a static call graph that describes the static calling relations. Based on graph analysis, we define several testing models that evaluate test cases against the actual execution of the software. Compared with models that evaluate test cases through static analysis, the models presented here can be applied to large-scale software systems, and their quantification can be completed automatically. Experiments show that these dynamic-analysis models perform well in improving testing efficiency and lay a quantitative foundation for managing and selecting test cases and for evaluating their capability to find software defects. Even more critically, they can point testers toward directions for improving and managing their tests.
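The dynamic call tree described in the abstract can be collected by instrumenting program execution. The following is a minimal illustrative sketch in Python, not the paper's actual tooling (the references suggest AspectJ-based instrumentation for Java programs); the function name `build_call_tree` and the `(name, children)` node shape are assumptions for illustration.

```python
import sys

def build_call_tree(func, *args, **kwargs):
    """Run func under a profiler and record its dynamic call tree.

    Each node is a (function_name, children) pair. Illustrative only:
    a real tool would also record arguments, timing, or call sites.
    """
    root = ("<root>", [])
    stack = [root]

    def tracer(frame, event, arg):
        if event == "call":
            # A Python function was entered: attach it under the
            # current top of the call stack.
            node = (frame.f_code.co_name, [])
            stack[-1][1].append(node)
            stack.append(node)
        elif event == "return" and len(stack) > 1:
            # The function returned: pop back to its caller.
            stack.pop()

    sys.setprofile(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.setprofile(None)
    return root

def leaf():
    pass

def mid():
    leaf()
    leaf()

tree = build_call_tree(mid)
# tree reproduces the dynamic calling relations of this run:
# mid called leaf twice.
```

A static call graph, by contrast, would be derived from the source code without running it, so the two structures can be compared to judge how much of the statically possible behavior a test case actually exercises.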

Keywords

software testing · evaluations of test cases · dynamic analysis



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Tao Hu
  • Tu Peng

  1. School of Software, Beijing Institute of Technology, China
