Test-driven development (TDD) is said to yield more testable programs. However, no study has yet answered either how this improved testability can be measured or whether it exists at all. To answer both questions, we introduce the concept of the controllability of assignments. We applied this metric to several TDD and conventional projects. Assignment controllability appears to support the common rules of thumb for testable code, e.g. small classes with low coupling are more testable than large classes with high coupling. Moreover, in contrast to the Chidamber and Kemerer metric suite for object-oriented design, controllability of assignments seems to indicate whether or not a project was developed with TDD.


Keywords: Program Code · Method Level · Testable Code · Student Project · Object Oriented Design




  1. Müller, M., Hagner, O.: Experiment about test-first programming. IEE Proceedings Software 149(5), 131–136 (2002)
  2. Pancur, M., Ciglaric, M., Trampus, M., Vidmar, T.: Towards empirical evaluation of test-driven development in a university environment. In: EUROCON 2003. Computer as a Tool. The IEEE Region 8, vol. 2, pp. 83–86 (2003)
  3. George, B., Williams, L.: An initial investigation of test driven development in industry. In: ACM Symposium on Applied Computing, Melbourne, Florida, USA, pp. 1135–1139 (2003)
  4. Geras, A., Smith, M., Miller, J.: A prototype empirical evaluation of test driven development. In: International Symposium on Software Metrics (Metrics), Chicago, Illinois, USA, pp. 405–416 (2004)
  5. Erdogmus, H., Morisio, M., Torchiano, M.: On the effectiveness of the test-first approach to programming. IEEE Transactions on Software Engineering 31(3), 226–237 (2005)
  6. Beck, K.: Aim, fire. IEEE Software 18(5), 87–89 (2001)
  7. Binder, R.: Design for testability in object-oriented systems. Communications of the ACM 37(9), 87–101 (1994)
  8. Chidamber, S., Kemerer, C.: A metrics suite for object oriented design. IEEE Transactions on Software Engineering 20(6), 476–493 (1994)
  9. Abramovici, M., Breuer, M., Friedman, A.: Digital Systems Testing and Testable Design. Computer Science Press, Rockville (1990)
  10. Apache: Byte Code Engineering Library (BCEL)
  11. Canoo: WebTest
  12.
  13.
  14. Hollander, M., Wolfe, D.: Nonparametric Statistical Methods, 2nd edn. John Wiley & Sons, Chichester (1999)
  15. Kleinbaum, D.: Logistic Regression: A Self-Learning Text. Springer, Heidelberg (1994)
  16. Wilson, D.: Teaching XP: A case study. In: XP Universe, Raleigh, NC, USA (2001)
  17. Müller, M., Link, J., Sand, R., Malpohl, G.: Extreme programming in curriculum: Experiences from academia and industry. In: Eckstein, J., Baumeister, H. (eds.) XP 2004. LNCS, vol. 3092, pp. 294–302. Springer, Heidelberg (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Matthias M. Müller, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany
