Improving Unfamiliar Code with Unit Tests: An Empirical Investigation on Tool-Supported and Human-Based Testing

  • Conference paper
Product-Focused Software Process Improvement (PROFES 2012)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7343)

Abstract

Software testing is a well-established approach in modern software engineering practice for improving software products by systematically introducing unit tests at different levels during software development projects. Nevertheless, existing software solutions often suffer from a lack of unit tests that were not implemented during development because of time restrictions and/or resource limitations. A lack of unit tests can hinder effective and efficient maintenance processes. Introducing unit tests after deployment is a promising approach for (a) enabling systematic and automation-supported tests after deployment and (b) increasing product quality significantly. An important question is whether unit tests should be introduced manually by humans or generated automatically by tools. This paper reports on an empirical investigation of tool-supported and human-based unit testing in a controlled experiment, focusing on the defect detection effectiveness, false positives, and test coverage of the two testing approaches when applied to unfamiliar source code. The main results were that (a) the individual testing approaches (human-based and tool-supported testing) showed advantages for different defect classes, (b) tools delivered a higher number of false positives, and (c) tools achieved higher test coverage.
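The abstract contrasts human-based and tool-supported unit testing. As a concrete illustration (not part of the paper), the following JUnit 4 sketch shows the two styles on a small, hypothetical IntStack class: a human-written test that asserts the intended behaviour of unfamiliar code, including error cases, and a tool-style, Randoop-like regression test that merely records the return values observed when the test was generated, which tends to raise coverage but can also report intended behaviour as a failure (a false positive).

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class UnitTestStylesSketch {

        // Hypothetical class under "unfamiliar" test; it stands in for the
        // kind of collection code used as experiment objects in such studies.
        static class IntStack {
            private int[] data = new int[4];
            private int size = 0;

            void push(int value) {
                if (size == data.length) {
                    int[] bigger = new int[data.length * 2];
                    System.arraycopy(data, 0, bigger, 0, size);
                    data = bigger;
                }
                data[size++] = value;
            }

            int pop() {
                if (size == 0) {
                    throw new IllegalStateException("pop on empty stack");
                }
                return data[--size];
            }

            int size() {
                return size;
            }
        }

        // Human-based style: the tester studies the unfamiliar code or its
        // documentation and asserts intended behaviour, including error cases.
        @Test(expected = IllegalStateException.class)
        public void popOnEmptyStackFails() {
            new IntStack().pop();
        }

        @Test
        public void pushThenPopReturnsLastValue() {
            IntStack s = new IntStack();
            s.push(1);
            s.push(2);
            assertEquals(2, s.pop());
            assertEquals(1, s.size());
        }

        // Tool-supported style (e.g., feedback-directed random testing as in
        // Randoop): a generated call sequence whose assertions capture the
        // values observed during generation. Such tests serve as regression
        // oracles and raise coverage, but may flag intended behaviour as a
        // failure, which is one source of false positives.
        @Test
        public void generatedCallSequenceRegression() {
            IntStack s = new IntStack();
            s.push(-7);
            s.push(0);
            int popped = s.pop();
            assertEquals(0, popped);
            assertEquals(1, s.size());
        }
    }

Both tests run with any standard JUnit 4 runner; the IntStack class and all names above are illustrative assumptions, not artifacts from the experiment reported in the paper.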

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Winkler, D., Schmidt, M., Ramler, R., Biffl, S. (2012). Improving Unfamiliar Code with Unit Tests: An Empirical Investigation on Tool-Supported and Human-Based Testing. In: Dieste, O., Jedlitschka, A., Juristo, N. (eds) Product-Focused Software Process Improvement. PROFES 2012. Lecture Notes in Computer Science, vol 7343. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31063-8_22

  • DOI: https://doi.org/10.1007/978-3-642-31063-8_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-31062-1

  • Online ISBN: 978-3-642-31063-8
