Does Test-Driven Development Improve the Program Code? Alarming Results from a Comparative Case Study

  • Maria Siniaalto
  • Pekka Abrahamsson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5082)

Abstract

Test-driven development (TDD) is suggested to be one of the most fundamental practices of agile software development and is claimed to produce loosely coupled, highly cohesive code. However, how TDD affects the structure of the program code has not been widely studied. This paper presents the results of a comparative case study of five small-scale software development projects in which the effect of TDD on program design was evaluated using both traditional and package-level metrics. The empirical results reveal an unwanted side effect: some parts of the code may deteriorate. In addition, the differences in program code between TDD and iterative test-last development were not as clear as expected. This raises the question of whether the possible benefits of TDD outweigh its possible downsides, and whether the same benefits could be achieved simply by emphasizing unit-level testing activities.
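
The traditional object-oriented design metrics alluded to above include coupling and cohesion measures. As a rough illustration only, and not the measurement tooling used in the study, the Python sketch below computes a simplified per-class coupling count, loosely in the spirit of the classic coupling-between-objects idea; the function name coupling_per_class and the toy classes in the demo are illustrative assumptions.

    # Illustrative only: a simplified per-class coupling count, loosely inspired
    # by coupling-between-objects style metrics. Not the tooling used in the study.
    import ast


    def coupling_per_class(source: str) -> dict:
        """For each class in a module, count how many other classes defined
        in the same module it references by name."""
        tree = ast.parse(source)
        class_names = {n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}
        coupling = {}
        for cls in ast.walk(tree):
            if not isinstance(cls, ast.ClassDef):
                continue
            referenced = {
                node.id
                for node in ast.walk(cls)
                if isinstance(node, ast.Name) and node.id in class_names
            }
            referenced.discard(cls.name)  # ignore self-references
            coupling[cls.name] = len(referenced)
        return coupling


    if __name__ == "__main__":
        demo = '''
    class Logger:
        def write(self, msg):
            print(msg)

    class OrderService:
        def __init__(self):
            self.log = Logger()   # OrderService depends on Logger
    '''
        print(coupling_per_class(demo))  # {'Logger': 0, 'OrderService': 1}

Such inter-class dependency counts are only one ingredient of a design-quality assessment; in practice, package-level measures are computed with dedicated static-analysis tools rather than a toy script like this.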

Keywords

Test-Driven Development · Test-first Programming · Test-first Development · Agile Software Development · Software Quality

Copyright information

© IFIP International Federation for Information Processing 2008

Authors and Affiliations

  • Maria Siniaalto, F-Secure Oyj, Oulu, Finland
  • Pekka Abrahamsson, VTT Technical Research Centre of Finland, Oulu, Finland
