Continuous Test-Driven Development: A Preliminary Empirical Evaluation Using Agile Experimentation in Industrial Settings

  • Lech Madeyski
  • Marcin Kawalerowicz
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 733)

Abstract

Test-Driven Development (TDD) is an agile software development and design practice popularized by the eXtreme Programming methodology. Continuous Test-Driven Development (CTDD), proposed by the authors, is a recent enhancement of TDD that combines it with the continuous testing (CT) practice of running tests in the background, thereby eliminating the need for the developer to execute tests manually. This paper uses CTDD research to evaluate the idea of Agile Experimentation, a refined approach to performing disciplined scientific research in an industrial setting. The objective of this paper is to evaluate the new CTDD practice against the well-established TDD practice via a single-case empirical study involving a professional developer in a real, industrial software development project employing Microsoft .NET. We found a slight (4 min) drop in the mean red-to-green time (i.e., the time from the moment any test fails, or the project does not build, to the moment the project compiles and all tests pass), while the size of the CTDD versus TDD effect was non-zero but small (d-index = 0.22). The recorded results are not conclusive but accord with intuition: by eliminating the mundane need to execute tests, we made the developer slightly faster. If developers who use TDD embrace CTDD, the resulting small improvements in individual coding performance could, given the number of developers using TDD, add up to substantial savings across a company or the industry as a whole.
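The red-to-green metric and the phase comparison behind the reported effect size can be sketched as follows. This is an illustrative sketch, not the authors' actual instrumentation: the event-log format, the sample data, and the exact d-index formula (difference of phase means scaled by the baseline standard deviation, one common single-case effect size) are assumptions for the example.

```python
from statistics import mean, stdev

def red_to_green_times(events):
    """Given chronological (minute, status) build/test events, return the
    durations from each moment the project first goes 'red' (a failing test
    or broken build) until it next returns to 'green' (all tests passing)."""
    durations, red_start = [], None
    for t, status in events:
        if status == "red" and red_start is None:
            red_start = t                     # start of a red episode
        elif status == "green" and red_start is not None:
            durations.append(t - red_start)   # red episode resolved
            red_start = None
    return durations

def d_index(baseline, intervention):
    """Standardized mean difference between two phases (e.g., TDD baseline
    vs. CTDD intervention), scaled by the baseline standard deviation."""
    return (mean(baseline) - mean(intervention)) / stdev(baseline)

# Hypothetical red-to-green times (minutes) for the two phases.
tdd_times = [14, 20, 11, 17, 23]    # baseline phase (TDD)
ctdd_times = [12, 15, 9, 14, 18]    # intervention phase (CTDD)
print(round(d_index(tdd_times, ctdd_times), 2))
```

A positive d-index here means the intervention phase had shorter red-to-green times than the baseline; the made-up numbers above are not the study's data, so the printed value does not reproduce the reported 0.22.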


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Faculty of Computer Science and Management, Wrocław University of Science and Technology, Wrocław, Poland
  2. Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Opole, Poland
  3. CODEFUSION Sp. z o.o., Opole, Poland
