JExample: Exploiting Dependencies between Tests to Improve Defect Localization

  • Adrian Kuhn
  • Bart Van Rompaey
  • Lea Haensenberger
  • Oscar Nierstrasz
  • Serge Demeyer
  • Markus Gaelli
  • Koenraad Van Leemput
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 9)


To quickly localize defects, we want our attention to be focused on relevant failing tests. We propose to improve defect localization by exploiting dependencies between tests, using a JUnit extension called JExample. In a case study, a monolithic white-box test suite for a complex algorithm is refactored in three ways: into two traditional JUnit-style suites and into JExample. Of the three refactorings, JExample reports five times fewer defect locations and runs slightly faster (by 8–12%), while having similar maintenance characteristics. Compared to the original implementation, JExample greatly improves maintainability due to the improved factorization that follows accepted test quality guidelines. As such, JExample combines the benefits of test chains with the test quality aspects of JUnit-style testing.
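The test-chaining idea behind JExample can be sketched in plain Java: a producer test returns the fixture it has just verified, a dependent test receives that fixture as an argument, and when the producer fails its dependents are skipped rather than reported as further failures. The class, method names, and driver below are illustrative only, not the real JExample API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hand-rolled sketch of test chaining (not the actual JExample runner).
public class TestChainSketch {

    // Producer test: verifies the empty fixture and returns it,
    // so dependent tests can reuse it instead of rebuilding it.
    static Deque<String> emptyStack() {
        Deque<String> stack = new ArrayDeque<>();
        if (!stack.isEmpty()) throw new AssertionError("expected empty stack");
        return stack;
    }

    // Dependent test: conceptually "given emptyStack"; it consumes
    // the fixture the producer returned.
    static Deque<String> pushOne(Deque<String> stack) {
        stack.push("element");
        if (stack.size() != 1) throw new AssertionError("expected one element");
        return stack;
    }

    // Driver: a failing producer causes its dependents to be skipped,
    // which is what narrows the set of reported defect locations.
    static String run() {
        Deque<String> fixture;
        try {
            fixture = emptyStack();
        } catch (AssertionError e) {
            return "producer failed; dependents skipped";
        }
        pushOne(fixture);
        return "chain passed";
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

With an injected defect in `emptyStack`, only that one failure is reported and `pushOne` never runs, instead of both tests failing for a single defect.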


Keywords: Test Suite · Average Execution Time · Test Code · Domino Effect · Original Implementation





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Adrian Kuhn (1)
  • Bart Van Rompaey (2)
  • Lea Haensenberger (1)
  • Oscar Nierstrasz (1)
  • Serge Demeyer (2)
  • Markus Gaelli (1)
  • Koenraad Van Leemput (2)
  1. Software Composition Group, University of Bern, Bern, Switzerland
  2. University of Antwerp, Antwerpen, Belgium
