Predicting different levels of the unit testing effort of classes using source code metrics: a multiple case study on open-source software

  • Original Paper
  • Published in Innovations in Systems and Software Engineering

Abstract

The growth in size and complexity of object-oriented software systems brings new software quality assurance challenges. Applying equal testing (quality assurance) effort to all classes of a large and complex object-oriented software system is cost-prohibitive and unrealistic in practice. Predicting early the different levels of unit testing effort required for classes can therefore help managers to: (1) identify critical classes, which will require a relatively high testing effort and on which developers and testers must focus to ensure software quality, (2) plan testing activities, and (3) allocate resources optimally. In this paper, we empirically investigate the ability of the Quality Assurance Indicator (Qi), a synthetic metric that we proposed in previous work, to predict different levels of the unit testing effort of classes in object-oriented software systems. The unit testing effort of classes is addressed from the perspective of constructing unit test cases; we focus in particular on the effort involved in writing the code of unit test cases. To capture this effort, we used four metrics that quantify different characteristics of the code of the corresponding unit test cases. We used Means- and K-Means-based categorizations to group software classes into five categories according to the unit testing effort they involve. We performed an empirical analysis using data collected from eight open-source Java software systems from different domains for which JUnit test cases were available. To evaluate the ability of the Qi metric to predict different levels of the unit testing effort of classes, we used three modeling techniques: univariate logistic regression, univariate linear regression, and multinomial logistic regression. The performance of the models based on the Qi metric was compared, using several evaluation criteria, with that of models based on various well-known object-oriented source code metrics. Results indicate that the models based on the Qi metric have more promising prediction potential than those based on traditional object-oriented metrics.
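To make the described workflow concrete, below is a minimal sketch, not the authors' implementation, of the kind of analysis the abstract outlines: per-class test-code metrics are grouped into five unit-testing-effort levels with K-Means, and a multinomial logistic regression then predicts a class's level from a single source code metric such as Qi. All data, metric values, and parameters in the sketch are illustrative assumptions, not the study's data.

```python
# A minimal sketch of the paper's workflow under stated assumptions:
# synthetic per-class data stand in for the study's real measurements.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_classes = 400

# Hypothetical predictor: one source code metric value per class
# (playing the role of the Qi metric).
qi = rng.random((n_classes, 1))

# Hypothetical test-suite metrics quantifying the written JUnit test
# code for each class (the paper uses four such metrics); here they are
# simulated as correlated with the predictor plus noise.
test_metrics = (qi * rng.uniform(0.5, 1.5, (1, 4))
                + rng.normal(0.0, 0.1, (n_classes, 4)))

# Group classes into five unit-testing-effort levels with K-Means,
# mirroring the paper's K-Means-based categorization.
levels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(test_metrics)

# Fit a multinomial logistic regression predicting the effort level from
# the source code metric (scikit-learn uses the multinomial loss by
# default for multiclass targets with the lbfgs solver), then report
# held-out accuracy as one possible evaluation criterion.
X_train, X_test, y_train, y_test = train_test_split(qi, levels, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In the study itself, the categorization is built from four JUnit test-code metrics and the candidate predictors are the Qi metric and well-known object-oriented metrics; the sketch only mirrors that structure on synthetic data.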


Acknowledgements

This work was supported in part by a grant from NSERC (Natural Sciences and Engineering Research Council of Canada).

Author information

Correspondence to Fadel Toure.

About this article

Cite this article

Toure, F., Badri, M. & Lamontagne, L. Predicting different levels of the unit testing effort of classes using source code metrics: a multiple case study on open-source software. Innovations Syst Softw Eng 14, 15–46 (2018). https://doi.org/10.1007/s11334-017-0306-1
