
Regression test selection in test-driven development

Published in: Automated Software Engineering

Abstract

The large number of unit tests produced in the test-driven development (TDD) method, and the iterative execution of these tests, extend regression test execution time in TDD. This study aims to reduce test execution time in TDD. We propose a TDD-based approach that creates traceable code elements and connects them to the relevant test cases to support regression test selection during the TDD process. Our hybrid technique combines textual and syntactic program differencing, exploiting the nature of TDD to select related test cases, and uses a change detection algorithm to detect program changes. We report our experience with a tool called RichTest, which implements this technique. To evaluate our work, seven TDD projects were developed. The results indicate that the RichTest plugin significantly decreases both the number of test executions and the regression testing time, even when the overhead time is taken into account. The selected test suite detects faults effectively because the selected test cases are related to the modified partitions; moreover, the test cases cover all modified partitions, so the selection algorithm is safe. The concept is designed specifically for the TDD method. Although the idea is applicable in any programming language, it is currently implemented as a plugin for Eclipse Java.
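As a rough illustration of the block-to-test tracing that underpins the selection step, the following sketch links code blocks to test cases and selects the tests connected to the modified blocks. All names here are hypothetical and do not reflect RichTest's actual API; the block and test identifiers follow the Ca/Ta naming used in the appendix figures.

```java
import java.util.*;

// Hypothetical sketch of trace-based regression test selection:
// each code block is linked to the test cases that exercise it, and
// selection takes the union of tests over the changed blocks.
public class BlockBasedSelection {
    // code block id -> ids of the test cases linked to it
    private final Map<String, Set<String>> traces = new HashMap<>();

    public void link(String codeBlock, String testCase) {
        traces.computeIfAbsent(codeBlock, k -> new TreeSet<>()).add(testCase);
    }

    // Select every test case linked to at least one modified block.
    public Set<String> select(Collection<String> changedBlocks) {
        Set<String> selected = new TreeSet<>();
        for (String block : changedBlocks) {
            selected.addAll(traces.getOrDefault(block, Set.of()));
        }
        return selected;
    }

    public static void main(String[] args) {
        BlockBasedSelection rts = new BlockBasedSelection();
        rts.link("Ca005", "Ta009");
        rts.link("Ca003", "Ta011");
        rts.link("Ca003", "Ta012");
        // Only Ca003 changed, so only its two tests need to be re-run.
        System.out.println(rts.select(List.of("Ca003"))); // prints [Ta011, Ta012]
    }
}
```

Because only the tests linked to modified blocks are selected, the full suite never needs to run after a small change, which is where the reported reduction in regression time comes from.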




Notes

  1. https://github.com/MafiZo/RichTest.git.

  2. https://www.tiobe.com/tiobe-index/.

  3. Logical Structural Diff.

  4. Statements such as synchronized, do, try, for, if, while, and switch.

  5. Simple Theorem Prover.

  6. Code normalization is a semantic-preserving transformation.

  7. Each copy of the program that a developer saves; not necessarily a new release/version of the program.

  8. Installed via Help → Install New Software; the dependency folder address should also be set under Window → Preferences.

  9. Available from Window → Show View → Other → RichTest.

  10. Available from Window → Preferences → RichTest.

  11. Abstract Syntax Tree.

  12. switch, while, if, for, foreach, and try.

  13. result = autorank(data, alpha=0.05, verbose=False).


Author information


Contributions

Zohreh Mafi and Seyed-Hassan Mirian-Hosseinabadi both developed the programs. Mafi wrote the main manuscript text, and Mirian-Hosseinabadi edited the text.

Corresponding author

Correspondence to Seyed-Hassan Mirian-Hosseinabadi.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

Figure 19 illustrates the Preferences window of RichTest, where developers can set the block selection mode and the level of block granularity, as well as enable TDD mode.

Fig. 19: Preferences window

Figures 20 through 26 show the RichTest views, using the selectionSort project as an example. Figure 20 shows the BlockInfo View, which indicates that test case “ta009” is the only test block connected to the “Ca005” code block. The “Manage” button lets developers manage this relation manually, as seen in Fig. 21: the developer can link or unlink each test block to the “Ca005” code block.

Fig. 20: Block info view

Fig. 21: Manual management window of the BlockInfo View

Figure 22 illustrates the Commit View, where developers can see the type of modification and the block content. Figure 23 illustrates the Version Manager View, which lists all versions of the project and allows a new version to be created. Figure 24 illustrates the Regression Test View: two test cases, 'Ta011' and 'Ta012', have been selected and run successfully (green means pass, red means fail), and the required time and test results are shown.
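The per-test report in the Regression Test View (pass/fail status plus required time) can be sketched as follows. The names and the trivial stand-in tests are illustrative only, not RichTest's internals:

```java
import java.util.*;
import java.util.function.BooleanSupplier;

// Illustrative sketch of what a regression test view reports for each
// selected test case: an id, a pass/fail verdict, and the elapsed time.
public class RegressionTestRun {
    record Result(String testId, boolean passed, long nanos) {}

    static Result run(String testId, BooleanSupplier test) {
        long start = System.nanoTime();
        boolean passed = test.getAsBoolean();
        return new Result(testId, passed, System.nanoTime() - start);
    }

    public static void main(String[] args) {
        // The selected test cases; real entries would invoke unit tests.
        Map<String, BooleanSupplier> selected = new LinkedHashMap<>();
        selected.put("Ta011", () -> 1 + 1 == 2);
        selected.put("Ta012", () -> "a".equals("a"));
        for (var e : selected.entrySet()) {
            Result r = run(e.getKey(), e.getValue());
            System.out.println(r.testId() + ": " + (r.passed() ? "pass" : "fail"));
        }
    }
}
```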

Fig. 22: Commit view

Fig. 23: Version manager view

Fig. 24: Regression test view

Figure 25 shows the Compare View. The developer selects the desired code block (Ca003) to see the history of its changes and then selects two commits for comparison (Commit #21 and Commit #42). Figure 26 shows the comparison result; the orange box highlights the part that changed from Commit #21 to Commit #42.
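A minimal sketch of this kind of block-level comparison is shown below, assuming a naive line-based comparison rather than RichTest's actual differencing algorithm; the block contents are invented for illustration:

```java
import java.util.*;

// Naive sketch of what a compare view highlights: the lines of a code
// block that appear in the newer commit but not in the older one.
// (A real diff would also track deletions, moves, and line order.)
public class BlockCompare {
    static List<String> changedLines(List<String> oldBlock, List<String> newBlock) {
        List<String> changed = new ArrayList<>();
        for (String line : newBlock) {
            if (!oldBlock.contains(line)) changed.add(line);
        }
        return changed;
    }

    public static void main(String[] args) {
        List<String> commit21 = List.of("int i = 0;", "i++;");
        List<String> commit42 = List.of("int i = 0;", "i += 2;");
        // Only the modified line is highlighted.
        System.out.println(changedLines(commit21, commit42)); // prints [i += 2;]
    }
}
```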

Fig. 25: Compare view

Fig. 26: Compare view result

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Mafi, Z., Mirian-Hosseinabadi, SH. Regression test selection in test-driven development. Autom Softw Eng 31, 9 (2024). https://doi.org/10.1007/s10515-023-00405-w


