Applying Test Case Metrics in a Tool Supported Iterative Architecture and Code Improvement Process

  • Matthias Vianden
  • Horst Lichter
  • Tobias Rötschke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5891)

Abstract

To support an iterative architecture and code improvement process, a dedicated code analysis tool has been developed. However, introducing the process and the tool in a medium-sized company is always accompanied by difficulties, such as understanding the need for improvement. We therefore decided to use test effort as the central communication metaphor for code complexity and developed a metric suite that calculates the number of test cases needed for branch coverage and for (modified) boundary-interior testing. This paper introduces the developed metrics and presents a case study performed at a medium-sized software company to evaluate our approach. The main part of the paper is dedicated to the interpretation and comparison of the metrics. Finally, their application in an iterative code improvement process is investigated.
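
To illustrate the general idea behind such test case metrics (this is a minimal sketch, not the paper's actual metric suite), the following Python fragment estimates an upper bound on the number of test cases needed for branch coverage from a control flow graph, using McCabe's cyclomatic complexity. The names ControlFlowGraph and branch_coverage_upper_bound as well as the example graph are illustrative assumptions.

    # Minimal sketch: upper bound on the number of test cases needed for
    # branch coverage, derived from a control flow graph (CFG).
    # ControlFlowGraph and branch_coverage_upper_bound are illustrative
    # names, not taken from the paper's metric suite.

    class ControlFlowGraph:
        def __init__(self):
            self.nodes = set()
            self.edges = set()  # set of (source, target) pairs

        def add_edge(self, source, target):
            self.nodes.update((source, target))
            self.edges.add((source, target))

        def cyclomatic_complexity(self):
            # McCabe: V(G) = E - N + 2 for a single connected CFG
            return len(self.edges) - len(self.nodes) + 2


    def branch_coverage_upper_bound(cfg):
        # V(G) bounds the number of linearly independent paths and thus
        # gives a simple upper bound on the test cases needed to execute
        # every branch at least once.
        return cfg.cyclomatic_complexity()


    if __name__ == "__main__":
        # CFG of a method with one if/else decision followed by a while loop
        cfg = ControlFlowGraph()
        for edge in [("entry", "if"), ("if", "then"), ("if", "else"),
                     ("then", "while"), ("else", "while"),
                     ("while", "body"), ("body", "while"), ("while", "exit")]:
            cfg.add_edge(*edge)
        print(branch_coverage_upper_bound(cfg))  # -> 3

In this example the method yields V(G) = 3, i.e. at most three test cases are needed to cover every branch. The (modified) boundary-interior criterion additionally distinguishes paths by how loops are executed and therefore generally demands more test cases than plain branch coverage.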

Keywords

Metric · Test Complexity · Code Improvement

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Matthias Vianden ¹
  • Horst Lichter ¹
  • Tobias Rötschke ²

  1. Research Group Software Construction, RWTH Aachen University, Aachen, Germany
  2. SOPTIM AG, Aachen, Germany
