Abstract
Test cases are designed in service of goals, such as functional correctness or performance. However, we lack a clear understanding of how specific goal types influence test design. In this study, we explore this relationship through interviews and a survey with software developers, focusing on the identification and importance of goal types, the quantitative relations between goals and tests, and personal, organizational, methodological, and technological factors.
We identify nine goal types and assess their importance, then analyze three in depth: correctness, reliability, and quality. We observe that test design for correctness forms a “default” design process that is modified when pursuing other goals. For the examined goal types, test cases tend to be simple: many tests target a single goal, and each test focuses on one or two goals at a time. We also observe differences in testing practices, tools, and targeted system types between goal types, and find that test design can be influenced by organization, process, and team makeup. This study provides a foundation for future research on test design and testing goals.
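The abstract's observation that correctness-driven design is a "default" that gets modified for other goals can be illustrated with a small, hypothetical example. The function and test names below are invented for illustration and do not come from the study; the sketch only shows how a performance goal layers an extra constraint on top of an otherwise ordinary correctness check.

```python
import time

# Hypothetical function under test (invented for illustration).
def parse_price(text):
    """Parse a price string like '$3.50' into integer cents."""
    return round(float(text.lstrip("$")) * 100)

# Correctness-focused test: the "default" design, one goal,
# a simple input/expected-output check.
def test_parse_price_correctness():
    assert parse_price("$3.50") == 350

# Performance-focused test: modifies the default design by adding
# a timing constraint, so the test now targets two goals at once
# (correctness of each call plus an overall time budget).
def test_parse_price_performance():
    start = time.perf_counter()
    for _ in range(10_000):
        assert parse_price("$3.50") == 350
    assert time.perf_counter() - start < 1.0  # generous time budget

test_parse_price_correctness()
test_parse_price_performance()
print("ok")
```

Note how the performance test reuses the correctness assertion verbatim, which matches the paper's finding that each test tends to focus on only one or two goals rather than many.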
Support provided by Software Center Project 30: “Aspects of Automated Testing”.
Notes
1. Available at https://doi.org/10.5281/zenodo.8106998.
2. One survey response was discarded, as a respondent answered twice. We retained the first response from this participant.
3. Job titles have been merged when similar, e.g., “software tester” and “test engineer”. The survey asked for both development and testing experience, while the interview only asked about years of development experience.
© 2023 IFIP International Federation for Information Processing
Cite this paper
Istanbuly, D., Zimmer, M., Gay, G. (2023). How Do Different Types of Testing Goals Affect Test Case Design?. In: Bonfanti, S., Gargantini, A., Salvaneschi, P. (eds) Testing Software and Systems. ICTSS 2023. Lecture Notes in Computer Science, vol 14131. Springer, Cham. https://doi.org/10.1007/978-3-031-43240-8_7
Print ISBN: 978-3-031-43239-2
Online ISBN: 978-3-031-43240-8