Empirical Software Engineering, Volume 10, Issue 4, pp 437–466

A Characterisation Schema for Software Testing Techniques

  • Sira Vegas
  • Victor Basili
Abstract

One of the major problems within the software testing area is how to get a suitable set of cases to test a software system. This set should assure maximum effectiveness with the least possible number of test cases. There are now numerous testing techniques available for generating test cases. However, many are never used, while a few are used over and over again. Testers have little (if any) information about the available techniques and their usefulness and, generally, about how well suited they are to the project at hand, upon which to base their decision about which testing techniques to use. This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information for them to select the techniques best suited to a given project. This assures that the decisions they make are based on objective knowledge of the techniques rather than on perceptions, suppositions and assumptions.
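To make the idea concrete, the catalogue the abstract describes can be thought of as a set of technique entries, each characterised by attributes, that a tester filters against project needs. The following is a minimal sketch of that idea; the attribute names (sources of information, cost, defect types) are illustrative assumptions, not the actual schema defined in the paper.

```python
# Hypothetical sketch of a characterisation-schema catalogue and a simple
# selection filter. Attribute names are illustrative assumptions, not the
# schema defined in the paper.
from dataclasses import dataclass, field

@dataclass
class TechniqueEntry:
    """One catalogue entry describing a testing technique."""
    name: str
    source_of_info: str            # e.g. "specification" or "source code"
    cost_of_application: str       # e.g. "low", "medium", "high"
    defect_types: list = field(default_factory=list)
    tools_available: bool = False

def select(catalogue, required_defect_type, max_cost):
    """Return techniques that target a defect type within a cost bound."""
    cost_rank = {"low": 0, "medium": 1, "high": 2}
    return [t for t in catalogue
            if required_defect_type in t.defect_types
            and cost_rank[t.cost_of_application] <= cost_rank[max_cost]]

catalogue = [
    TechniqueEntry("Boundary value analysis", "specification", "low",
                   ["boundary"], True),
    TechniqueEntry("All-uses data flow testing", "source code", "high",
                   ["data flow"], True),
]

chosen = select(catalogue, "boundary", "medium")
print([t.name for t in chosen])  # -> ['Boundary value analysis']
```

A selection made this way rests on recorded, comparable attributes of each technique rather than on the tester's perceptions, which is the objective basis for decision-making that the paper argues for.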

Keywords

Software testing · testing technique selection · characterisation schema

Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  1. Facultad de Informática, Universidad Politécnica de Madrid, Madrid, Spain
  2. Department of Computer Science, University of Maryland, USA