Software Quality Journal, Volume 19, Issue 4, pp 771–799

TORC: test plan optimization by requirements clustering

  • Baris Güldali
  • Holger Funke
  • Stefan Sauer
  • Gregor Engels

Abstract

Acceptance testing is a time-consuming task for complex software systems that have to fulfill a large number of requirements. To reduce this effort, we have developed a largely automated method for deriving test plans from requirements that are expressed in natural language. It consists of three stages: annotation, clustering, and test plan specification. The general idea is to exploit redundancies and implicit relationships in requirements specifications. Multi-viewpoint techniques based on RM-ODP (Reference Model for Open Distributed Processing) are employed for specifying the requirements. We then use linguistic analysis techniques, requirements clustering algorithms, and pattern-based requirements collection to reduce the total effort of testing against the requirements specification. In particular, we use linguistic analysis to extract and annotate the actor, process, and object of a requirements statement. During clustering, a similarity function is computed as a measure of the overlap between requirements. In the test plan specification stage, our approach provides capabilities for semi-automatically deriving test plans and acceptance criteria from the clustered informal textual requirements. Two patterns are applied to compute a suitable order of test activities. The generated test plans consist of a sequence of test steps and assertions that are executed or checked in the given order. We also present the supporting prototype tool TORC, which is available as open source. For the evaluation of the approach, we have conducted a case study in the field of acceptance testing of a national electronic identification system. In summary, we report on lessons learned about how linguistic analysis and clustering techniques can help testers understand the relations between requirements and improve test planning.
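
To make the clustering stage more concrete, here is a minimal sketch of the idea, assuming requirements annotated with (actor, process, object) triples; the triple representation, the similarity threshold, and the greedy grouping are illustrative assumptions and not the algorithm implemented in TORC:

    # Hypothetical sketch (not the authors' TORC implementation): each
    # requirement carries an (actor, process, object) annotation; similarity is
    # the fraction of matching slots, and requirements above a threshold are
    # greedily grouped into the same cluster.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Requirement:
        rid: str      # requirement identifier
        actor: str    # who acts (from linguistic analysis)
        process: str  # what is done
        obj: str      # what is acted upon

    def similarity(a: Requirement, b: Requirement) -> float:
        """Overlap of the three annotated slots, normalized to [0, 1]."""
        pairs = zip((a.actor, a.process, a.obj), (b.actor, b.process, b.obj))
        return sum(1 for x, y in pairs if x == y) / 3.0

    def cluster(reqs: List[Requirement], threshold: float = 0.66) -> List[List[Requirement]]:
        """Greedy grouping: a requirement joins the first cluster whose first
        member it is similar enough to; otherwise it starts a new cluster."""
        clusters: List[List[Requirement]] = []
        for r in reqs:
            for c in clusters:
                if similarity(r, c[0]) >= threshold:
                    c.append(r)
                    break
            else:
                clusters.append([r])
        return clusters

    if __name__ == "__main__":
        reqs = [
            Requirement("R1", "inspection system", "read", "data group 1"),
            Requirement("R2", "inspection system", "verify", "data group 1"),
            Requirement("R3", "card holder", "enter", "PIN"),
        ]
        for group in cluster(reqs):
            print([r.rid for r in group])  # -> ['R1', 'R2'] and ['R3']

In this toy example, the two requirements that share actor and object end up in the same cluster and could therefore be covered by a single test plan, while the unrelated requirement is kept separate.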

Keywords

Linguistic analysis · Requirements clustering · Acceptance testing · Test planning · Acceptance criteria

Notes

Acknowledgments

We thank Dr. Michael Jahnich for contributing to the earlier publications and Peter Winkelhane for contributing to the TORC tool support.

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Baris Güldali (1)
  • Holger Funke (2)
  • Stefan Sauer (1)
  • Gregor Engels (1)

  1. s-lab – Software Quality Lab, University of Paderborn, Paderborn, Germany
  2. HJP Consulting GmbH, Borchen, Germany
