Empirical Software Engineering, Volume 16, Issue 2, pp 177–210

A multiple comparative study of test-with development product changes and their effects on team speed and product quality

Abstract

Researchers have typically studied the effects of Test-First Development (TFD), compared with Test-Last Development (TLD), across groups or projects and over relatively short durations. We defined Test-With Development (TWD) more generally than the fine-grained test-then-code step of TFD, yet still in contrast to the coarse-grained test-after phase of TLD. Using this definition, we performed a multiple comparative study to explore and describe TWD product changes, and the effects of those changes on two attributes related to team speed and two attributes related to product quality, within six long-term open-source projects. Our results indicate that when developers exercised some of their changes with automated tests, they made significantly larger changes over time, on average, while significantly reducing their product’s complexity; when they exercised all of their changes with tests, they made significantly smaller changes over time, on average. We interpret these results to indicate that practicing TWD supports faster simplification of a product. We therefore conclude that teams that need to reduce their product’s complexity can benefit from practicing TWD.
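The abstract distinguishes commits whose production changes are exercised by automated tests from those that are not. A minimal sketch of how such a distinction can be drawn when mining a repository is shown below; the file-path heuristic (`"test"` appearing in the path) and the label names are illustrative assumptions, not the study's actual classification method.

```python
# Hypothetical heuristic: label a commit by the mix of test and
# production files it touches. A sketch only -- the paper's own
# operationalization of TWD may differ.

def classify_commit(changed_files):
    """Return 'test-with', 'test-only', or 'production-only'."""
    def is_test(path):
        # Assumption: test code lives in paths containing "test".
        return "test" in path.lower()

    tests = [f for f in changed_files if is_test(f)]
    prod = [f for f in changed_files if not is_test(f)]

    if tests and prod:
        return "test-with"       # production change accompanied by tests
    if tests:
        return "test-only"       # only test code changed
    return "production-only"     # change not exercised by tests here

print(classify_commit(["src/parser.py", "tests/test_parser.py"]))
```

Aggregating these labels over a project's history would give the proportion of changes developed "with" tests, which could then be related to change size and complexity trends.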

Keywords

Multiple comparative study · Test-with development · Team speed · Product quality

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

Computing Laboratory, University of Oxford, Oxford, UK