Abstract
Software testing is a well-established practice in modern software engineering for improving software products by systematically introducing unit tests at different levels during development. Nevertheless, existing software solutions often lack unit tests because they were not implemented during development due to time restrictions and/or resource limitations. Missing unit tests can hinder effective and efficient maintenance processes. Introducing unit tests after deployment is a promising approach for (a) enabling systematic, automation-supported testing after deployment and (b) significantly increasing product quality. An important question is whether unit tests should be written manually by humans or generated automatically by tools. This paper presents an empirical investigation of tool-supported and human-based unit testing in a controlled experiment, focusing on defect detection effectiveness, false positives, and test coverage of the two testing approaches applied to unfamiliar source code. The main results were that (a) the individual testing approaches (human-based and tool-supported) showed advantages for different defect classes, (b) tools delivered a higher number of false positives, and (c) tools achieved higher test coverage.
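The contrast studied in the paper can be illustrated with a hypothetical sketch (not taken from the paper itself): a human-written unit test asserts a domain-specific expected value, whereas generated robustness tests in the spirit of random-testing tools such as JCrasher mainly check generic properties, e.g. that no unexpected exception is thrown. The class name, method, and seeded defect below are invented for illustration.

```java
// Hypothetical class under test with a seeded off-by-one defect.
class Accumulator {
    // Intended behavior: return the sum 1 + 2 + ... + n.
    // Seeded defect: the loop stops at n - 1 (should be i <= n).
    static int sumTo(int n) {
        int s = 0;
        for (int i = 1; i < n; i++) {
            s += i;
        }
        return s;
    }
}

public class ManualVsGenerated {
    public static void main(String[] args) {
        // Human-written test: asserts a domain-specific expected value,
        // so it catches the defect (sumTo(5) returns 10, not 15).
        boolean manualTestPasses = (Accumulator.sumTo(5) == 15);

        // Tool-style robustness check: only verifies that the call does not
        // throw an unexpected exception, so the seeded defect goes unnoticed.
        boolean robustnessCheckPasses;
        try {
            Accumulator.sumTo(5);
            robustnessCheckPasses = true;
        } catch (RuntimeException e) {
            robustnessCheckPasses = false;
        }

        System.out.println("manual test passes: " + manualTestPasses);
        System.out.println("robustness check passes: " + robustnessCheckPasses);
    }
}
```

The sketch shows why the two approaches detect different defect classes: the value-asserting manual test fails (defect found), while the exception-only check passes (defect missed), even though the latter is cheap to generate in large numbers and can drive up coverage.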
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Winkler, D., Schmidt, M., Ramler, R., Biffl, S. (2012). Improving Unfamiliar Code with Unit Tests: An Empirical Investigation on Tool-Supported and Human-Based Testing. In: Dieste, O., Jedlitschka, A., Juristo, N. (eds) Product-Focused Software Process Improvement. PROFES 2012. Lecture Notes in Computer Science, vol 7343. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31063-8_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31062-1
Online ISBN: 978-3-642-31063-8