Software Quality Journal, Volume 25, Issue 3, pp 951–978

An empirical study on the effects of code visibility on program testability

  • Lei Ma
  • Cheng Zhang
  • Bing Yu
  • Hiroyuki Sato

Abstract

Software testability represents the degree of ease with which a software artifact supports testing. When it is easy to detect defects in a program through testing, the program has high testability; otherwise, its testability is low. As an abstract property of programs, testability can be measured by various metrics, which are affected by different factors of design and implementation. In object-oriented software development, code visibility is important for supporting design principles such as information hiding. It is widely believed that code visibility has some effect on testability. However, little empirical evidence has been presented to clarify whether and how software testability is influenced by code visibility. We have performed a comprehensive empirical study to shed light on this problem. We first use code coverage as a concrete proxy for testability. We selected 27 real-world software programs as subjects and ran two state-of-the-art automated testing tools, Randoop and EvoSuite, on these programs to analyze their code coverage, in comparison with that of developer-written tests. The results show that code visibility does not necessarily affect code coverage, but can significantly affect automated tools: developer-written tests achieve similar coverage on code areas with different visibility, while low code visibility often leads to low code coverage for automated tools. In addition, we developed two enhanced variants of Randoop that implement multiple strategies to handle code visibility. The results for these Randoop variants show that different treatments of code visibility can lead to significant differences in the code coverage achieved by automated tools. In the second part, our study uses fault detection rate as another concrete measurement of testability. We applied the automated testing tools to 357 real faults. The result of our in-depth analysis is consistent with that of the first part, demonstrating the significant effects of code visibility on program testability.
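
To make the visibility mechanism concrete, below is a minimal Java sketch (the class Account and its members are hypothetical illustrations, not subjects from the study). A test generator that emits ordinary Java calls can only exercise members visible from the test's package, whereas private code is reachable only through workarounds such as reflection; reflection is one plausible strategy for handling low visibility, though the Randoop variants in the study may use different mechanisms.

    import java.lang.reflect.Method;

    // Hypothetical subject class: each member has a different visibility level.
    class Account {
        public int balance() { return 100; }           // visible to any test
        int auditCode() { return 7; }                  // package-private: same-package tests only
        private boolean overdrawn() { return false; }  // invisible to ordinary generated calls
    }

    public class VisibilityDemo {
        public static void main(String[] args) throws Exception {
            Account account = new Account();

            // A generator emitting plain Java calls is limited to the members
            // visible from the test's package.
            System.out.println(account.balance());    // always callable
            System.out.println(account.auditCode());  // callable only from the same package

            // Reflection with setAccessible(true) is one way to reach private
            // code, at the cost of bypassing the class's intended interface.
            Method m = Account.class.getDeclaredMethod("overdrawn");
            m.setAccessible(true);
            System.out.println(m.invoke(account));
        }
    }

Run from the same package, all three calls succeed and print 100, 7, and false; a test generated in a different package could compile only the first call, which illustrates why low code visibility tends to depress the coverage that automated tools achieve.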

Keywords

Software testing · Software testability · Automated testing · Code coverage · Code visibility · Code accessibility · Fault detection

Acknowledgments

We would like to thank René Just for sharing Defects4J and for suggestions on its usage. We thank Michael Ernst and Sai Zhang for the discussions on Randoop, and Qingzhou Luo and Cyrille Artho for the discussions on JPF Symbolic PathFinder. We also thank Gordon Fraser and José Campos for their help with configuring EvoSuite. This work was supported by the Fundamental Research Funds for the Central Universities (AUGA5710000816) and the National High-tech R&D Program of China (863 Program, grants 2015AA020101 and 2015AA020108).


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Harbin Institute of Technology, Harbin, China
  2. University of Waterloo, Waterloo, Canada
  3. Waseda University, Tokyo, Japan
  4. The University of Tokyo, Tokyo, Japan
