
Automated Recognition of Low-Level Process: A Pilot Validation Study of Zorro for Test-Driven Development

  • Hongbing Kou
  • Philip M. Johnson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3966)

Abstract

Zorro is a system designed to automatically determine whether a developer is complying with the Test-Driven Development (TDD) process. Automated recognition of TDD could benefit the software engineering community in a variety of ways, from pedagogical aids that support learning the practice to more rigorous empirical studies of the effectiveness of TDD in practice. This paper presents the Zorro system and the results of a pilot validation study, which shows that Zorro recognized TDD episodes correctly 89% of the time. The results also indicate ways to further improve Zorro’s classification accuracy and provide evidence for the effectiveness of this approach to low-level software process recognition.
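The episode recognition described above can be made concrete with a small sketch. The following Java fragment is a minimal, hypothetical illustration only: the class, enum, and event names are invented here, and Zorro's actual episode rules (built on development streams collected by Hackystat sensors) are considerably richer. It classifies one idealized episode shape: edit a test, see it fail, edit production code, see the test pass.

import java.util.List;

// Toy recognizer for one idealized TDD episode shape. All names here are
// illustrative assumptions, not Zorro's actual implementation.
public class TddEpisodeClassifier {

    // Developer activity types that an editor sensor might emit.
    enum Event { TEST_EDIT, TEST_FAIL, PRODUCTION_EDIT, TEST_PASS }

    // True if the event stream matches the canonical TDD ordering:
    // write a test, watch it fail, write production code, watch it pass.
    static boolean isTddEpisode(List<Event> episode) {
        return episode.equals(List.of(
                Event.TEST_EDIT, Event.TEST_FAIL,
                Event.PRODUCTION_EDIT, Event.TEST_PASS));
    }

    public static void main(String[] args) {
        List<Event> testFirst = List.of(
                Event.TEST_EDIT, Event.TEST_FAIL,
                Event.PRODUCTION_EDIT, Event.TEST_PASS);
        List<Event> testLast = List.of(
                Event.PRODUCTION_EDIT, Event.TEST_EDIT, Event.TEST_PASS);
        System.out.println(isTddEpisode(testFirst)); // true
        System.out.println(isTddEpisode(testLast));  // false
    }
}

A real recognizer would have to tolerate variation (refactoring steps, repeated test runs) rather than matching a single fixed sequence; this sketch only illustrates the event-stream idea.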

Keywords

Automated Recognition, Developer Behavior, Software Engineering Process, Software Engineering Community, Development Stream


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hongbing Kou (1)
  • Philip M. Johnson (1)

  1. Collaborative Software Development Laboratory, Department of Information and Computer Sciences, University of Hawaii, Honolulu, USA
