On the Effects of Pair Programming on Thoroughness and Fault-Finding Effectiveness of Unit Tests

  • Lech Madeyski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4589)


Code coverage and mutation score measure how thoroughly tests exercise programs and how effective they are, respectively. The objective of this study is to provide empirical evidence on the impact of pair programming on both the thoroughness and the fault-finding effectiveness of test suites, as pair programming is considered one of the practices that can make testing more rigorous, thorough and effective. A large experiment was conducted with MSc students working solo and in pairs. The subjects were asked to write unit tests using JUnit and to follow the test-driven development approach, as suggested by the eXtreme Programming methodology. It appeared that neither branch coverage nor the mutation score indicator (a lower bound on mutation score) was significantly affected by using pair programming instead of solo programming, although a slight, statistically insignificant positive impact of pair programming on the mutation score indicator was noticeable. The results therefore do not support a positive impact of pair programming on making testing more effective and thorough. The generalization of the results is limited by the fact that MSc students participated in the study, and it is possible that the benefits of pair programming would exceed those observed in this experiment for larger, more complex and longer projects.
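The distinction between the mutation score and the mutation score indicator mentioned above can be illustrated with a short sketch. This is not the paper's tooling (the study used a Java mutation tool); it is a minimal illustration, assuming the common definitions: the classic mutation score divides killed mutants by the number of non-equivalent mutants, while the indicator divides by all generated mutants, treating every surviving mutant as potentially non-equivalent and thus yielding a lower bound.

```python
def mutation_score(killed: int, total: int, equivalent: int) -> float:
    """Classic mutation score: killed mutants over non-equivalent mutants."""
    return killed / (total - equivalent)

def mutation_score_indicator(killed: int, total: int) -> float:
    """Lower bound on the mutation score: equivalent mutants are not
    identified, so every generated mutant counts in the denominator."""
    return killed / total

# Example: 100 mutants generated, 80 killed by the test suite,
# 10 of the survivors known (after manual inspection) to be equivalent.
indicator = mutation_score_indicator(80, 100)   # 0.80
true_score = mutation_score(80, 100, 10)        # 80/90, about 0.889
```

Because identifying equivalent mutants requires manual inspection, the indicator is what a tool can report automatically, and it never exceeds the true mutation score.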


Keywords: Laboratory Session · Code Coverage · User Story · Branch Coverage · Pair Programming




Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Lech Madeyski
  1. Institute of Applied Informatics, Wroclaw University of Technology, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland
